
California AG puts AI firms on notice, and we know DeepSeek’s secret

The AI landscape in the US is under siege from domestic legal and regulatory challenges, particularly in California, where existing consumer protection, civil rights, competition, and data privacy laws are being applied to AI technologies. Meanwhile, on the foreign front, AI research continues to push the boundaries of efficiency, with recent findings unlocking some of the secrets behind DeepSeek’s meteoric rise and shedding light on optimal AI model sparsity for improved performance.
The California Attorney General’s Office has issued a legal advisory outlining how the state’s existing laws apply to AI, reinforcing the need for transparency, accountability, and compliance among AI developers and users.

This advisory warns that AI systems must not exacerbate bias, facilitate fraud, or contribute to misinformation.

It also stresses that companies deploying AI tools must ensure their systems are tested and validated to meet ethical and legal standards.

The advisory highlights several key risks associated with AI, including:

Bias and discrimination: AI systems used for credit assessments, hiring, and medical diagnostics must not introduce or amplify biases against protected groups.

Fraud and deception: The use of AI in deepfakes, voice cloning, and misleading chatbot interactions is subject to scrutiny under California’s unfair competition and false advertising laws.

Consumer data protection: AI systems must comply with the California Consumer Privacy Act (CCPA), which grants residents rights over how their data is collected, stored, and shared.

Election misinformation: AI-generated content designed to mislead voters or impersonate political candidates is explicitly prohibited under state law.

These challenges highlight the need for AI firms operating in the US to navigate an increasingly complex legal landscape while maintaining compliance with state and federal regulations.

Unlocking DeepSeek’s secrets

While regulatory concerns grow in the US, a research team led by Apple is making strides in optimising model efficiency.

A recent study on sparse Mixture-of-Experts (MoE) language models found that increasing model sparsity leads to significant improvements in training efficiency and performance – and sparsity is a key component of DeepSeek’s success.

Key findings from the research include:

Optimal sparsity enhances training: The study showed that reducing the number of active parameters in MoE models while maintaining a high total parameter count can lead to better pretraining performance at a lower computational cost.

Trade-offs in inference: While sparser models perform well during training, their efficiency at inference time may vary depending on the complexity of downstream tasks; dense models still outperform sparse ones in tasks requiring deep reasoning.

Scaling laws for sparsity: The study proposed new scaling laws for AI training that balance parameter count and computational cost, providing insights for designing more efficient large-scale models.
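The sparsity idea behind these findings can be sketched with a toy example: an MoE layer holds many experts, but a router activates only a small top-k subset per token, so the compute used per token is a fraction of the total parameter count. The layer sizes, routing scheme, and random weights below are illustrative assumptions for the sketch, not details from the Apple study or DeepSeek’s actual architecture.

```python
import numpy as np

# Toy Mixture-of-Experts layer: n_experts small MLPs, but only the
# top_k highest-scoring experts run for each token. Sparsity is the
# fraction of experts left inactive per token.
rng = np.random.default_rng(0)

d_model, d_hidden = 64, 256
n_experts, top_k = 8, 2          # only 2 of 8 experts fire per token

# Each expert is a small two-layer MLP (weights are random here).
experts = [
    (rng.standard_normal((d_model, d_hidden)) * 0.02,
     rng.standard_normal((d_hidden, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]          # indices of the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                      # softmax over chosen experts
    out = np.zeros_like(x)
    for w, idx in zip(weights, chosen):
        w_in, w_out = experts[idx]
        out += w * (np.maximum(x @ w_in, 0.0) @ w_out)  # ReLU MLP expert
    return out

token = rng.standard_normal(d_model)
y = moe_forward(token)

params_per_expert = d_model * d_hidden + d_hidden * d_model
total_params = n_experts * params_per_expert
active_params = top_k * params_per_expert
print(f"total expert params: {total_params}")
print(f"active per token:    {active_params} "
      f"({active_params / total_params:.0%} of total)")
```

Here only 25% of the expert parameters are touched for any given token; the study’s question is, roughly, how far that active fraction can be pushed down (with total parameters held high) before pretraining and downstream performance start to suffer.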

These findings point to a possible AI future where developers could achieve more with fewer resources by adopting optimal sparsity strategies, a crucial insight as computational demands for training large AI models continue to escalate.

Sparsity could substantially lower the cost of access to advanced AI models and undercut the established offerings from the likes of OpenAI, Meta and Microsoft.

About Lindsey Schutters

Lindsey is the editor for ICT, Construction & Engineering, and Energy & Mining at Bizcommunity.