New York passes AI safety bill to prevent "catastrophic risks to humanity"

Illustration: Zarif Faiaz

New York state lawmakers have approved a first-of-its-kind bill requiring major AI developers to implement safety measures to prevent catastrophic misuse of their technology.

Called the New York RAISE Act, the bill requires AI companies that spend over $100 million to train advanced models—such as OpenAI, Google, and Anthropic—to publish safety protocols assessing risks like automated crime, bioweapons development, and large-scale harm. Firms must also report serious incidents, including model theft or dangerous behaviour, with penalties of up to $30 million for non-compliance.

Sponsored by Senator Andrew Gounardes and Assemblymember Alex Bores, the RAISE Act avoids regulating smaller startups, focusing instead on "frontier AI" systems with the potential for widespread damage. Unlike California's vetoed SB 1047, it omits controversial provisions like mandatory "kill switches" but empowers New York's attorney general to enforce transparency rules.  

Critics, including Anthropic co-founder Jack Clark, warn that the bill could still burden smaller firms, while others fear companies may restrict access to advanced AI in New York—a pattern seen under the EU's strict tech laws. Supporters, however, argue the law is a necessary step as AI capabilities outpace oversight.

According to a report by TechCrunch, the bill now awaits Governor Kathy Hochul's signature to become law.