A bill aimed at regulating powerful artificial intelligence models is under consideration in the California legislature, despite an outcry that it could kill the technology it seeks to control.
“With Congress deadlocked on AI regulation…California must act to overcome the foreseeable risks posed by rapidly advancing AI while encouraging innovation,” said Democratic Sen. Scott Wiener of San Francisco, sponsor of the bill.
But critics, including Democratic members of the US Congress, argue that threats of punitive measures against developers in a nascent field can stifle innovation.
“The view of many of us in Congress is that SB 1047 is well-intentioned but ill-informed,” said influential Rep. Nancy Pelosi of California, noting that top party members have shared their concerns with Wiener.
“While we want California to lead in artificial intelligence in a way that protects consumers, data, intellectual property and more, SB 1047 does more harm than good in that pursuit,” Pelosi said.
Pelosi pointed out that Stanford University computer science professor Fei-Fei Li, whom she called the “Godmother of artificial intelligence” for her status in the field, is among those who oppose the bill.
Harm or help?
The bill, called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, won’t fix what it’s intended to fix and will “profoundly harm AI academia, small tech, and the open source community,” Li wrote earlier this month on X. Small tech refers to startups and small companies, as well as researchers and entrepreneurs.
Wiener said the legislation aims to ensure the safe development of large-scale artificial intelligence models by establishing safety standards for developers of systems that cost more than $100 million to train.
The bill requires developers of large AI models to take precautions such as pre-deployment testing, simulating hacker attacks, installing cyber security safeguards, as well as providing whistleblower protections.
Recent changes to the bill include replacing criminal penalties for violations with civil penalties such as fines.
Wiener argues that AI safety and innovation are not mutually exclusive, and that the amendments to the bill have addressed some of the concerns of critics.
OpenAI, the creator of ChatGPT, also opposed the bill, saying it would prefer national rules, fearing a chaotic patchwork of AI regulations across US states.
At least 40 states have introduced bills this year to regulate artificial intelligence, and half a dozen have adopted resolutions or enacted legislation targeting the technology, according to the National Conference of State Legislatures.
OpenAI said the California bill could also drive innovators out of the state, home to Silicon Valley.
But Anthropic, another AI maker that would potentially be affected by the measure, said that after some welcome tweaks, the bill has more benefits than flaws.
The bill also has high-profile supporters from the AI community.
“Powerful artificial intelligence systems hold incredible promise, but the risks are also very real and must be taken extremely seriously,” said computer scientist Geoffrey Hinton, the “Godfather of Artificial Intelligence,” in a Fortune article cited by Wiener.
“SB 1047 takes a very sensible approach to balancing these concerns.”
Regulating artificial intelligence with “real teeth” is critical, and California is a natural place to start, as it has been a launch pad for the technology, according to Hinton.
Meanwhile, faculty and students at the California Institute of Technology are urging people to sign a letter opposing the bill.
“We believe this proposed legislation poses a significant threat to our ability to advance research by imposing burdensome and unrealistic regulations on the development of artificial intelligence,” Caltech professor Anima Anandkumar wrote on X.
Source: AFP