RAISE Act Signed: NY Enacts Landmark AI Safety Law
New York has officially joined California in leading U.S. artificial intelligence regulation after Governor Kathy Hochul signed the RAISE Act into law on December 20, 2025. Designed to ensure public safety in an era of rapidly evolving AI, the law requires major developers to disclose safety protocols and report incidents within 72 hours—or face steep fines. With federal AI legislation stalled in Congress, New York’s move signals a growing state-led push for accountability in the tech sector.
What Is the RAISE Act—and Why It Matters
The Responsible AI Safety and Education Act, or RAISE Act, establishes one of the nation's toughest regulatory frameworks for high-risk AI systems. It targets companies developing large-scale models used in finance, healthcare, hiring, and public infrastructure, sectors where errors or bias could cause real-world harm. Under the law, developers must document testing procedures, risk-mitigation strategies, and data sources, then submit this information to a newly formed AI oversight unit within New York's Department of Financial Services (DFS).
This isn’t just about transparency—it’s about enforceable accountability. The RAISE Act empowers state regulators to audit AI systems and penalize noncompliance, filling a critical gap as federal agencies remain divided on how to approach AI governance.
From Lobbying Battles to Legislative Victory
The path to Hochul’s signature wasn’t smooth. After the state legislature passed the original RAISE Act in June 2025, tech industry lobbyists pushed hard for significant rollbacks, citing concerns about innovation and compliance costs. Hochul initially proposed amendments to soften reporting mandates and delay enforcement. But in a surprising turn, she reversed course and signed the original bill—while securing a legislative agreement to revisit her suggested changes in 2026.
State Senator Andrew Gounardes, a key sponsor, celebrated the outcome as a win for public interest over corporate influence. “Big Tech thought they could weasel their way into killing our bill,” he declared on social media. “We shut them down and passed the strongest AI safety law in the country.”
Heavy Fines for AI Safety Violations
The RAISE Act puts a steep price on noncompliance. Companies that fail to report safety incidents within 72 hours, including system breaches, discriminatory outputs, or uncontrolled autonomous behavior, can be fined up to $1 million. Repeat offenders face penalties of up to $3 million per violation.
These aren’t symbolic slaps on the wrist. In a market where AI models power everything from loan approvals to medical diagnostics, the financial risk now matches the societal stakes. Regulators hope the threat of multimillion-dollar fines will compel companies to prioritize safety audits and incident response plans long before deployment.
A New AI Watchdog Within DFS
To enforce the law, New York is creating a dedicated AI Safety Office inside the Department of Financial Services, one of the nation's most powerful financial regulators. The office will be staffed with technical experts in machine learning, cybersecurity, and ethics, giving it the muscle to assess complex AI systems rather than relying solely on corporate self-reporting.
DFS Superintendent Adrienne Harris emphasized the unit’s mission: “We’re not here to stifle innovation. We’re here to ensure that when AI touches people’s lives—through credit, insurance, or employment—it does so fairly, safely, and transparently.”
Building on California’s AI Safety Blueprint
Governor Hochul didn’t act in isolation. She explicitly framed the RAISE Act as a complement to California’s AI safety law, signed by Governor Gavin Newsom in September 2025. Both laws share core principles: mandatory incident reporting, third-party audits, and protections against deceptive AI practices like deepfakes used in fraud.
By aligning standards across the two largest tech economies in the U.S., New York and California are creating de facto national norms that could pressure other states—and eventually Congress—to follow suit. “This law builds on California’s recently adopted framework,” Hochul said, “creating a unified benchmark among the country’s leading tech states as the federal government lags behind.”
Tech Giants Offer Cautious Support
Perhaps surprisingly, given the industry's earlier lobbying against the bill, leading AI developers welcomed the legislation, even as they called for broader federal action. OpenAI and Anthropic both issued statements supporting New York's approach, with Anthropic's head of external affairs, Sarah Heck, telling The New York Times: "The fact that two of the largest states in the country are setting clear expectations gives developers stability and helps avoid a patchwork of conflicting rules."
Still, both companies stressed that state-by-state regulation isn’t sustainable long-term. Their message to Washington is clear: pass a federal AI safety law before the regulatory landscape becomes unmanageable.
What This Means for Consumers and Developers
For everyday New Yorkers, the RAISE Act means greater protection against AI-driven harm—whether it’s being denied a job by a biased algorithm or receiving inaccurate medical advice from a chatbot. Developers, meanwhile, now face a new operational reality: safety isn’t optional, and transparency is mandatory.
Startups may worry about compliance burdens, but the law includes carve-outs for small businesses and open-source projects that don’t deploy high-risk systems at scale. The focus remains squarely on the biggest players—the ones with the deepest pockets and the widest reach.
A Turning Point for U.S. AI Policy
New York’s move marks a pivotal moment in America’s AI governance journey. With Congress gridlocked and federal agencies still drafting guidelines, states are stepping into the void. The RAISE Act proves that robust, enforceable AI regulation isn’t just possible—it’s already happening.
More importantly, it challenges the narrative that safety and innovation must be at odds. By demanding accountability upfront, New York is signaling that responsible AI isn’t a barrier to progress—it’s the foundation of public trust.
What’s Next for AI Regulation in 2026
All eyes now turn to 2026, when New York lawmakers will revisit Hochul’s proposed amendments and potentially refine the law’s scope. Meanwhile, states like Colorado, Illinois, and Washington are drafting similar bills, inspired by the California–New York model.
If this trend continues, the U.S. could see a wave of state-level AI laws that collectively shape national standards—forcing Washington’s hand. For now, New York has drawn a line: AI advancement must not come at the cost of public safety. And with the RAISE Act, that line is now law.