OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as the death or serious injury of 100 or more people or at least $1 billion in property damage.
The effort appears to mark a shift in OpenAI's legislative strategy. Until now, OpenAI has largely played defense, opposing bills that would have made AI labs liable for their technology's harms. Several AI policy experts tell WIRED that SB 3444, which could set a new standard for the industry, is a more extreme measure than bills OpenAI has supported in the past.
The bill would shield frontier AI developers from liability for "critical harms" caused by their frontier models so long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, which likely would apply to America's largest AI labs, like OpenAI, Google, xAI, Anthropic, and Meta.
"We support approaches like this because they focus on what matters most: reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses, small and large, of Illinois," said OpenAI spokesperson Jamie Radice in an emailed statement. "They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards."
Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to these extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions under SB 3444, the AI lab behind the model may not be held liable, so long as the incident wasn't intentional and the lab published its reports.
Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, can be liable for these kinds of harm caused by their technology. But as AI labs continue to release more powerful AI models that raise novel safety and cybersecurity challenges, such as Anthropic's Claude Mythos, these questions feel increasingly pressing.
In her testimony supporting SB 3444, a member of OpenAI's Global Affairs team, Caitlin Niedermeyer, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a message that's consistent with the Trump administration's crackdown on state AI safety laws, claiming it's critical to avoid "a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety." This is also consistent with the broader view of Silicon Valley in recent years, which has often argued that it's paramount for AI regulations not to hamper America's position in the global AI race. While SB 3444 is itself a state-level safety regulation, Niedermeyer argued that such laws can be effective if they "reinforce a path toward harmonization with federal systems."
"At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation," Niedermeyer said.
Scott Wisor, policy director for the Secure AI Project, tells WIRED he believes the bill has a slim chance of passing, given Illinois' reputation for aggressively regulating technology. "We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There's no reason existing AI companies should be facing reduced liability," Wisor says.
