- Some in Silicon Valley have long argued that AI needs rules.
- A new bill in California that introduces such rules is splitting opinion, however.
Silicon Valley seems to agree that artificial intelligence is the future. A California bill that could soon dictate how AI is developed, however, is revealing deep schisms among tech leaders over the way the technology should be rolled out to the world.
SB 1047, an AI safety bill introduced in California seven months ago by Sen. Scott Wiener, is set to face a vote in the California State Assembly this week, but not everyone with a stake in the AI race appears to be on the same page.
The core aim of the bill, which passed the state Senate in a bipartisan vote in May, is to introduce a set of measures that, in theory at least, curb the risks posed by the most powerful AI models, such as their potential use in creating dangerous weapons or mounting cyberattacks.
Such measures include requiring companies operating in California to allow third parties to test their models for safety and to build in a "kill switch" that could shut down their models if deemed necessary.
Many with a horse in the AI race have already voiced their frustrations with the bill.
In a letter to Sen. Wiener's office last week, Sam Altman's OpenAI expressed strong disapproval of the bill. Partners at venture capital firms such as Andreessen Horowitz (a16z) and prominent AI scientists such as former Googler Andrew Ng have also raised concerns.
The effort to protect innovation and open source continues. I believe we’re all better off if anyone can carry out basic AI research and share their innovations. Right now, I’m deeply concerned about California's proposed law SB-1047. It’s a long, complex bill with many parts…
— Andrew Ng (@AndrewYNg) June 6, 2024
'Tough call'
Others appear to be in favor.
In a social media post on Monday, Elon Musk, who founded AI company xAI last year and has a long-standing rivalry with OpenAI's Altman, said that although it was "a tough call and will make some people upset," he thinks "California should probably pass the SB 1047 AI safety bill."
"For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public," Musk wrote on X.
Meanwhile, OpenAI rival Anthropic appeared to emerge as a supporter of the bill last week, as its CEO, Dario Amodei, wrote in a letter to California Gov. Gavin Newsom that the "benefits likely outweigh the costs."
The bill has proven divisive for a number of reasons.
Why Silicon Valley is split on SB 1047
First, there is the question of its impact on innovation.
According to the anti-SB 1047 camp, the bill's stringent measures threaten to slow the development of future models, something that could prove dangerous as the US faces a rising AI power in the form of China.
That, at least, was the argument put forward by OpenAI's chief strategy officer, Jason Kwon, in the company's letter to Sen. Wiener. Not only would the bill "slow the pace of innovation," Kwon said, but it could trigger an exodus of "California's world-class engineers and entrepreneurs."
The regulation's impact on AI models that aren't among the most powerful, and its possible consequences for startups, are also raising questions.
Legal 'patchwork'
A16z general partner Anjney Midha, for instance, wrote in the Financial Times last month that the bill's categorization of powerful models as anything with a training cost of more than $100 million sets a "relatively low bar" given that "AI development costs run to billions."
There is also a view that the federal government should take the lead on regulations of this magnitude. OpenAI's Kwon, for instance, suggested that governance of AI at a state level could create a "patchwork" of laws.
Sen. Wiener, of course, dismisses this idea. Responding to OpenAI's letter, he said that "ideally Congress would handle this," but Congress's inaction to date makes him skeptical that it will act anytime soon.
Of course, AI companies know there is an urgency to ensure the tech is rolled out safely. Earlier this month, OpenAI revealed that it had taken down "a cluster of ChatGPT accounts that were generating content for a covert Iranian influence operation" called Storm-2035.
Its purpose, OpenAI said, was to generate content and commentary on topics such as the US presidential election.
That is just one illustration of AI's nefarious potential. Whether SB 1047 is the right way to address that threat will be a subject of fierce debate in the coming months.