Artificial Intelligence • February 3, 2026

New York Governor Signs Sweeping AI Safety Law: What Businesses Can Do in 2026 to Prepare For a New Era | Fisher Phillips

By Battery Wire Staff
951 words • 5 min read


New York's Bold Move on AI Regulation

New York Gov. Kathy Hochul signed the Responsible AI Safety and Education Act, known as the RAISE Act, into law on Dec. 19, 2025. The measure imposes safety, transparency and reporting requirements on developers of advanced "frontier" AI models. Hochul approved the bill eight days after President Trump's executive order challenging state-level AI regulations, according to Alston & Bird. The law targets systems with potential for catastrophic risks, such as cyberattacks or bioweapon development.

The RAISE Act establishes obligations for "large developers" of AI models trained with more than 10^26 floating point operations and over $100 million in compute costs. It requires safety assessments, incident reporting within 72 hours and ongoing oversight. Enforcement falls to the New York attorney general, with penalties up to $1 million for initial violations and $3 million for subsequent ones, per Morrison & Foerster. The law takes effect in 2026, though sources differ on the exact date.

This state action comes amid federal tensions, as Trump's orders aim to curb such regulations. States like New York are stepping in to address AI safety gaps left by congressional inaction.

Key Provisions and Enforcement Details

The RAISE Act focuses on frontier AI models posing high risks. Developers must create safety and risk assessment plans, report incidents quickly, maintain oversight and ensure transparency, according to Fisher Phillips. These form the "Core 4" requirements.

Key elements include:

  • Safety protocols that developers must publish.
  • Reporting of critical incidents, such as those causing imminent catastrophic harm, within 72 hours.
  • Oversight by a new office under the Department of Financial Services, which handles fees, audits and rules.
  • Exemptions for accredited universities and entities transferring full intellectual property rights.

Thresholds for "large developers" vary across reports. Alston & Bird and Harris Beach Murtha cite over $100 million in aggregate compute costs for training frontier models. Jones Walker and Skadden report a revenue-based trigger of more than $500 million in annual revenue after January 2026 amendments. The law applies to models developed, deployed or operated in New York.
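
For teams trying to gauge whether these triggers could apply to them, the reported thresholds reduce to a simple arithmetic check. The Python sketch below encodes both reported variants as an illustrative applicability screen; the constants, the function name and the or-combination of the two triggers are assumptions made for illustration, not the statute's actual test.

    # Hypothetical applicability screen based on thresholds reported for the
    # RAISE Act. Illustrative assumption only; the statute's final text and
    # the January 2026 amendments control.
    FLOP_THRESHOLD = 1e26                  # training compute, in FLOPs
    COMPUTE_COST_THRESHOLD = 100_000_000   # aggregate training compute cost, USD
    REVENUE_THRESHOLD = 500_000_000        # annual revenue trigger, per some reports

    def may_be_large_developer(training_flops: float,
                               compute_cost_usd: float,
                               annual_revenue_usd: float) -> bool:
        """Rough screen for the reported 'large developer' triggers."""
        # Compute-based variant: > 10^26 FLOPs and > $100M compute cost
        compute_trigger = (training_flops > FLOP_THRESHOLD
                           and compute_cost_usd > COMPUTE_COST_THRESHOLD)
        # Revenue-based variant reported after the January 2026 amendments
        revenue_trigger = annual_revenue_usd > REVENUE_THRESHOLD
        return compute_trigger or revenue_trigger

    # Example: 3e26 FLOPs at $150M compute cost trips the compute-based trigger
    print(may_be_large_developer(3e26, 150_000_000, 80_000_000))  # True

However the math comes out, counsel should confirm scope against the enacted text, since the two reported triggers may not combine this simply.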

Penalties also show discrepancies. Alston & Bird notes up to $10 million for initial violations and $30 million for subsequent ones. However, Harris Beach Murtha, Morrison & Foerster and Skadden align on $1 million initial and $3 million subsequent, closer to California's model. The attorney general holds exclusive enforcement power, with no private right of action; injunctive relief and corporate veil-piercing for bad-faith structuring are also available, per Harris Beach Murtha.

Parallels and Differences with California's Law

New York's RAISE Act builds on California's Transparency in Frontier Artificial Intelligence Act, or TFAIA, signed on Sept. 29, 2025, but introduces stricter elements. Both laws regulate frontier AI models with similar definitions: systems exceeding 10^26 FLOPs and $100 million in compute costs.

Differences emerge in reporting and timelines. New York requires 72-hour incident reporting for critical safety events, stricter than California's 15-day window or 24-hour rule for imminent risks, according to Alston & Bird and Cleary Cyberwatch. California's TFAIA takes effect on Jan. 1, 2026, per multiple sources.

Penalties in California cap at $1 million per violation, matching the majority reading of New York's penalty structure. Developer thresholds also run parallel, with California using compute-cost and revenue metrics similar to those in New York's reported amendments.

Broader contrasts include New York's focus on a dedicated oversight office for audits and fees, not explicitly detailed in California's law. Both states offer exemptions for certain academic and IP-transfer scenarios, and enforcement rests solely with state attorneys general, without private lawsuits. The Future of Privacy Forum notes that New York's act emphasizes catastrophic risks more narrowly than California's, which includes broader transparency measures, while Data Innovation describes the laws as contributing to regulatory fragmentation despite claims of national alignment.

Impacts on Developers and Federal Context

The RAISE Act impacts major companies, with Alston & Bird naming OpenAI (ChatGPT), Anthropic (Claude), Google (Gemini), Microsoft (Copilot) and Meta (Llama) as developers currently in scope. These firms face compliance costs for risk assessments and reporting in their New York operations.

The law arrives amid federal pushback, including President Trump's Executive Order 14179 of January 2025 and a December 2025 order aimed at curbing state regulations, according to Fisher Phillips. Congress dropped provisions in the National Defense Authorization Act that would have blocked such laws.

Globally, the act aligns with the EU AI Act, whose first obligations took effect in February 2025, per Pillsbury. States like New York and California fill a void left by congressional inaction on AI safety, focusing on existential risks rather than bias or employment issues. Data Innovation critiques this as creating a patchwork that burdens multi-state businesses.

"In-house legal teams should begin aligning development, compliance and incident response functions now to ensure readiness ahead of the law’s 2026 effective date," Harris Beach Murtha stated.

Steps for Compliance and Future Outlook

Businesses can prepare by reviewing AI development pipelines for frontier model thresholds, Fisher Phillips recommends. Actionable measures include the following; a minimal sketch of the 72-hour reporting clock appears after the list:

  • Conducting internal audits of compute costs and revenue to determine applicability.
  • Developing safety and risk assessment plans, including protocols for catastrophic scenarios.
  • Establishing 72-hour incident reporting systems, stricter than California's requirements.
  • Training teams on transparency obligations and preparing for potential audits by the oversight office.
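
To make the reporting-clock item above concrete, here is a minimal Python sketch that computes the filing deadline, assuming the 72-hour window runs from the moment an incident is detected. When the clock legally starts is a question for counsel and any implementing rules; the function names here are illustrative.

    # Minimal sketch of a 72-hour incident-reporting clock. Assumes the window
    # runs from detection time, which is an assumption, not the statute's rule.
    from datetime import datetime, timedelta, timezone

    REPORTING_WINDOW = timedelta(hours=72)

    def reporting_deadline(detected_at: datetime) -> datetime:
        """Return the assumed filing deadline: detection time plus 72 hours."""
        if detected_at.tzinfo is None:
            raise ValueError("use a timezone-aware timestamp")
        return detected_at + REPORTING_WINDOW

    def hours_remaining(detected_at: datetime) -> float:
        """Hours left before the assumed deadline (negative once it has passed)."""
        delta = reporting_deadline(detected_at) - datetime.now(timezone.utc)
        return delta.total_seconds() / 3600

    # Example: an incident detected at 09:00 UTC on March 2, 2026
    detected = datetime(2026, 3, 2, 9, 0, tzinfo=timezone.utc)
    print(reporting_deadline(detected))  # 2026-03-05 09:00:00+00:00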

Effective dates remain uncertain: Fisher Phillips reports Jan. 1, 2027, while Alston & Bird specifies March 19, 2026. Companies should monitor the January 2026 amendments for a definitive date.

The RAISE Act signals growing state-level oversight, with New York joining California at the forefront of U.S. artificial intelligence regulation. "New York has officially joined California at the forefront of US artificial intelligence regulation... marks one of the most consequential state AI safety laws enacted to date," Fisher Phillips stated. As federal efforts seek uniformity, developers must adapt to this evolving landscape, potentially shaping national standards for AI safety.

🤖 AI-Assisted Content Notice

This article was generated using AI technology (grok-4-0709) and has been reviewed by our editorial team. While we strive for accuracy, we encourage readers to verify critical information with original sources.

Generated: January 12, 2026