Artificial Intelligence April 4, 2026

California imposes new AI regulations on businesses in "first-of-its-kind" executive order signed by Newsom

By Dr. Sarah Mitchell, Technology Analyst
1415 words • 7 min read

California's Pushback Against AI Risks

Gov. Gavin Newsom signed an executive order on March 30, 2026, intensifying California's scrutiny of artificial intelligence in state procurement. The directive requires AI companies seeking state contracts to certify protections against illegal content generation, harmful biases, civil rights violations and privacy breaches, including the creation of child sexual abuse material. This action, detailed in a press release on gov.ca.gov, directly challenges the Trump administration's preference for minimal federal oversight, establishing California as a leader in regulating generative AI tools integrated into public services.

Building on earlier efforts, the order follows a 2023 directive that set guardrails for AI use in state agencies and the Transparency in Frontier Artificial Intelligence Act (SB 53), signed Sept. 29, 2025. As reported by CalMatters, it responds to tensions such as the Department of Defense's designation of San Francisco-based Anthropic as a supply-chain risk. Newsom highlighted the contrast in the press release: "Unlike the Trump administration, California remains committed to ensuring that AI solutions adopted and deployed by [California] ... cannot be misused by bad actors."

State agencies, including the Government Operations Agency and the Department of Technology, have 120 days to develop vetting processes and standards for watermarking AI-generated content to combat misinformation from deepfakes and synthetic media. The order also promotes the adoption of vetted AI tools to improve services, such as assisting residents with job searches or business startups. More than 20 state agencies already use tools like the Poppy AI assistant, balancing innovation with strict controls.

Timeline of California's AI Governance

California's AI regulations have advanced steadily since 2023, when Newsom's first executive order directed agencies to implement ethical guardrails for generative AI while pursuing efficiency gains. As Statescoop reported, that initiative emphasized internal tools to streamline operations without ethical compromises. By Sept. 29, 2025, SB 53 required developers of large-scale frontier AI models—similar to those behind ChatGPT or Midjourney—to report on risk management, including bias mitigation and protections against discrimination or surveillance abuses.

The latest order, described by CBS News as a rebuttal to Trump-era deregulation, weaves these elements into procurement rules. It mandates certifications for handling illegal content, reducing biases and safeguarding privacy. Watermarking standards will embed imperceptible markers in AI-generated images and videos to verify authenticity and trace origins, aligning with global AI safety trends amid scrutiny of companies like Anthropic.

Key milestones include:
- 2023: Initial executive order on generative AI guardrails and efficiency tools for state agencies.
- Sept. 29, 2025: SB 53 signed, mandating transparency and risk reporting for frontier AI models.
- March 30, 2026: New order enforcing certifications for state contractors and watermarking protocols.

These developments underscore California's role as home to major AI firms and the world's fourth-largest economy, according to CalMatters and gov.ca.gov. The state has also passed laws on deepfakes, robocalls, child safety and performers' likeness protections, forming a comprehensive framework that diverges from lighter federal regulations.

Breaking Down Procurement Certifications

The executive order's certification process requires AI vendors to confirm their systems detect illegal content, such as child sexual abuse material, and mitigate biases that could cause civil rights violations. This involves algorithmic testing for discriminatory outputs, building on SB 53 standards for frontier models. The press release outlines privacy safeguards to prevent unauthorized surveillance or data exploitation.

Watermarking is a key technical requirement, with standards due within 120 days to embed metadata or patterns in synthetic media, countering deepfakes and misinformation. Techniques like steganography could hide verification data in images or videos without affecting quality. Unlike optional watermarks in tools like Midjourney, California's mandate applies to all state-contracted systems, potentially influencing broader adoption.
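The steganographic approach mentioned above can be illustrated with a minimal sketch. The snippet below embeds verification bytes in the least significant bits of pixel values, so each pixel changes by at most one intensity level. This is an illustrative toy, not the state's (still unpublished) standard: production provenance systems use markers robust to compression and cropping, and the flat list of 8-bit pixel values here stands in for a real image.

```python
# Toy least-significant-bit (LSB) watermarking sketch.
# Assumption: "pixels" is a flat list of 8-bit grayscale values;
# real standards (e.g. C2PA-style provenance) are far more robust.

def embed_watermark(pixels, message):
    """Hide `message` bytes in the LSBs of `pixels` (MSB-first per byte)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the LSB (+/-1 change)
    return out

def extract_watermark(pixels, n_bytes):
    """Recover `n_bytes` of hidden data from the pixel LSBs."""
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        data.append(byte)
    return bytes(data)

# Embed a hypothetical provenance tag and read it back.
tagged = embed_watermark(list(range(256)), b"CA-2026")
print(extract_watermark(tagged, 7))  # b'CA-2026'
```

Because only the lowest bit of each pixel changes, the visible image is essentially unaltered, which is what "imperceptible markers" means in practice; the trade-off is that naive LSB marks do not survive re-encoding, which is why mandated standards lean on sturdier schemes.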

Specific certification elements include:
- Bias mitigation: Testing algorithms for discriminatory outputs, such as in job-matching tools.
- Illegal content safeguards: Filters to block exploitative material, compliant with child safety laws.
- Privacy protections: Data-handling rules with opt-out options for personal information.
- Watermarking: Traceable markers in synthetic media to reduce risks in services like homelessness support.
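One widely used statistical check behind the bias-mitigation element is the disparate-impact ratio, the "four-fifths rule" from U.S. employment-selection guidelines. The sketch below is a hypothetical audit of a job-matching tool's outputs, not the order's actual test procedure (which has not been specified); the group names and counts are invented for illustration.

```python
# Disparate-impact check: ratio of the lowest group selection rate
# to the highest. Values below 0.8 are a conventional red flag
# under the "four-fifths rule". Data below is hypothetical.

def disparate_impact_ratio(outcomes):
    """outcomes: dict mapping group -> (selected, total)."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

ratio = disparate_impact_ratio({
    "group_a": (50, 100),  # 50% recommended by the tool
    "group_b": (35, 100),  # 35% recommended by the tool
})
print(f"{ratio:.2f}")  # 0.70 -> below the 0.8 threshold, flag for review
```

A certification regime could require vendors to report such ratios across protected attributes before a tool is approved for state use, though the order itself leaves the exact testing methodology to the agencies drafting standards.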

Enforcement relies on denying contracts rather than imposing penalties. Statescoop notes gaps in the technical details for bias detection, and the order's reach is limited to state procurement.

Navigating Federal-State AI Conflicts

Newsom's order highlights the rift with the Trump administration's deregulation agenda, which opposes state rules and favors light oversight in areas like Medicare AI use. CBS News portrays it as a counter to big tech lobbying and federal actions, including labeling Anthropic a supply-chain risk that could limit its government contracts. CalMatters links the order to this friction, positioning California's rules as a counterbalance intended to pressure firms toward stronger safeguards.

In practice, state agencies must favor vetted AI for tasks like employee support or public benefits navigation. The Poppy AI assistant, used by more than 20 departments, will need updated certifications. Newsom critiqued federal approaches in the press release, noting they operate "in the shadow of misuse."

Compared to other states, California's measures outpace mandates in Colorado, Texas and Utah, which lack similar certifications or watermarking. Globally, it echoes AI safety efforts, and with California's economic weight—hosting most major firms—it could drive national standards despite federal resistance.

Industry Ripples and Paths Forward

The order could transform AI development, compelling companies like Anthropic to enhance watermarking and bias detection for state contracts, which offer substantial revenue. This may spur investments in content filters, influencing product designs and reducing harmful outputs in public applications. Newsom emphasized in a CBS News quote: "California's always been the birthplace of innovation. But we also understand the flip side: in the wrong hands innovation can be misused in ways that put people at risk."

Limitations persist, as the rules apply only to state contractors, creating a bifurcated market in which compliant firms gain an advantage in government work. Building on SB 53, the order promotes transparency through risk reporting and adversarial testing for deepfakes. With more than 20 agencies using AI, effective watermarking could inspire federal adoption amid ongoing tensions.

While a positive step, the order emphasizes symbolism over enforcement, with its 120-day timeline risking shallow implementation absent penalties or detailed specs. It bolsters California's leadership but requires interstate collaboration to offset federal deregulation. True advancement will depend on measurable results, potentially setting benchmarks for ethical AI nationwide if paired with rigorous follow-through.

🤖 AI-Assisted Content Notice

This article was generated using AI technology (grok-4-0709) and has been reviewed by our editorial team. While we strive for accuracy, we encourage readers to verify critical information with original sources.

Generated: April 4, 2026