California legislators just sent Gov. Gavin Newsom more than a dozen bills regulating artificial intelligence, testing for threats to critical infrastructure, curbing the use of algorithms on children, limiting the use of deepfakes, and more.
But people in and around the AI industry say the proposed laws fail to stop some of the most worrisome harms of the technology, like discrimination by businesses and government entities. At the same time, the observers say, whether passed bills get vetoed or signed into law may depend heavily on industry pressure, in particular accusations that the state is regulating itself out of competitiveness in a hot field.
Debates over the bills, and decisions by the governor on whether to sign each of them, are particularly important because California is at the epicenter of AI development, with many legislators making pledges this year to regulate the technology and put the state at the forefront of protecting people from AI around the world.
Without question, Senate Bill 1047 got more attention than any other AI regulation bill this year — and after it passed both chambers of the legislature by wide margins, industry and consumer advocates are closely watching to see whether Newsom signs it into law.
Introduced by San Francisco Democratic Sen. Scott Wiener, the bill addresses huge potential threats posed by AI, requiring developers of advanced AI models to test them for their ability to enable attacks on digital and physical infrastructure and help non-experts make chemical, biological, radioactive, and nuclear weapons. It also protects whistleblowers who want to report such threats from inside tech companies.
But what if the most concerning harms from AI are commonplace rather than apocalyptic? That’s the view of people like Alex Hanna, head of research at Distributed AI Research, a nonprofit organization created by former Google ethical AI researchers based in California. Hanna said 1047 shows how California lawmakers focused too much on existential risk and not enough on preventing specific forms of discrimination. She would much rather lawmakers consider banning the use of facial recognition in criminal investigations since that application of AI has already been shown to lead to racial discrimination. She would also like to see government standards around potentially discriminatory technology adopted by contractors.
“I think 1047 got the most noise for God knows what reason but they’re certainly not leading the world or trying to match what Europe has in this legislation,” she said of California’s legislators.
Bill against AI discrimination is stripped
One bill that did address discriminatory AI was gutted and then shelved this year. Assembly Bill 2930 would have required AI developers to perform impact assessments and submit them to the Civil Rights Department, and would have made the use of discriminatory AI illegal, subject to a $25,000 fine for each violation.
The original bill sought to make use of discriminatory AI illegal in key sectors of the economy including housing, finance, insurance, and health care. But author Rebecca Bauer-Kahan, a San Ramon Democrat, yanked it after the Senate Appropriations Committee limited the bill to assessing AI in employment. That sort of discrimination is already expected to be curbed by rules that the California Civil Rights Department and California Privacy Protection Agency are drafting. Bauer-Kahan told CalMatters she plans to put forward a stronger bill next year, adding, “We have strong anti-discrimination protections but under these systems we need more information.”
Like Wiener’s bill, Bauer-Kahan’s was subject to lobbying by opponents in the tech industry, including Google, Meta, Microsoft and OpenAI, which hired its first lobbyist ever in Sacramento this spring. Unlike Wiener’s bill, it also attracted opposition from nearly 100 companies from a wide range of industries, including Blue Shield of California, dating app company Bumble, biotech company Genentech, and pharmaceutical company Pfizer.
The failure of the AI discrimination bill is one reason there are still “gaping holes” in California’s AI regulation, according to Samantha Gordon, chief program officer at TechEquity, which lobbied in favor of the bill. Gordon, who co-organized a working group on AI with privacy, labor, and human rights groups, believes the state still needs legislation to address “discrimination, disclosure, transparency, and which use cases deserve a ban because they have demonstrated an ability to harm people.”
Still, Gordon said, the passage of Wiener’s bill marked important progress, as did the passage of Senate Bill 892, which sets the standards for contracts government agencies sign for AI services. Doing so, author and Chula Vista Democratic Sen. Steve Padilla told CalMatters earlier this year, leverages the government’s buying power to encourage safer and more ethical AI services.
While some experts criticized Wiener’s bill for what it failed to do, the tech industry has gone after it for what it does. The measure’s testing requirements and associated enforcement mechanisms will kneecap fast-moving tech companies and create a chilling effect on code sharing that inhibits innovation, big tech companies like Google and Meta have said.
Given the industry’s power in California, this criticism is the proverbial elephant in the room, said Joep Meindertsma, CEO of Pause.ai. Pause.ai is a proponent of regulating AI, endorsing Wiener’s bill and even organizing protests at the offices of California-based companies including Meta and OpenAI. So Meindertsma was happy to see so many regulatory bills clear the legislature this year. But he worries they will be undermined by the tension between a desire to regulate AI and a desire to win the race — among not just companies but entire countries — to have the best AI. Regulators in California and elsewhere, he said, want to have it both ways.
“The market dynamic between countries that are trying to stay ahead of the competition, trying to avoid regulating their companies too much over fear of slowing down while the others keep racing, that dynamic is the issue that I feel is the most toxic in the entire situation,” he said.
There are already signs that industry pressure could prevail, at least against Wiener’s bill.
Several Democratic members of California’s Congressional delegation have called on Newsom to veto the bill. Former House Speaker Nancy Pelosi, who represents San Francisco, has also come out against it.
In recent weeks, Newsom seems to have leaned into AI, raising questions over how much appetite he has to regulate it. The governor showed great interest in using AI to solve problems in the state of California, signing an agreement with AI powerhouse Nvidia last month, launching an AI for tax advice pilot program in February, and on Thursday introducing an AI solution aimed at connecting homeless people with services. When asked directly about Wiener’s bill in May, Newsom equivocated, saying that lawmakers must strike a balance between responding to calls for regulation and overdoing it.
The sleeper hits of this year’s AI legislation
Some bills that were more targeted — and significantly less publicized — than Wiener’s 1047 did find success in the legislature.
SB 942, introduced by Democratic Sen. Josh Becker of Menlo Park, would require companies to supply AI detection tools to the public at no charge so people can tell the difference between AI-generated content and reality.
SB 896 by Democratic Sen. Bill Dodd of Napa would force government agencies to assess the risk of using generative AI and disclose when the technology is used.
Other AI bills passed this legislative session are designed to protect children, including one that makes it a crime to create child pornography with generative AI and another that requires the makers of social media apps to turn off algorithmic curation of content for users under age 18 unless they get permission from a parent or guardian. By default, children would instead see a chronological stream of recent posts from accounts they follow. The bill also limits notifications from social media apps during school hours and between midnight and 6 am.
A trio of bills passed last week aim to protect voters from deceptive audio, imagery, and video known as deepfakes. One bill goes after individuals who create or publish deceptive content made with AI and allows a judge to order an injunction requiring them to either take down the content or pay damages. Another bill requires large online platforms such as Facebook to remove or label deepfakes within 72 hours of a user reporting them, while yet another requires political campaigns to disclose use of AI in advertising.
Also on Newsom’s desk are bills that would require creatives to get permission before using the likeness of a dead person and prohibit use of digital replicas in some instances. Both of those bills were supported by the actors union SAG-AFTRA.
Which bills didn’t pass
In lawmaking, what fails to pass, like Bauer-Kahan’s AI discrimination bill, is often just as important as what advances.
Case in point: AB 3211, which would have required AI makers to label AI-generated content. It sputtered out despite support from companies including Adobe, Microsoft, and OpenAI. In a statement shared on social media on Tuesday, bill author Democratic Assemblymember Buffy Wicks of Oakland said it’s unfortunate that the California Senate did not take up her bill, which she said “was model policy for the rest of the nation.” She plans to reintroduce it next year.
The labeling bill and Bauer-Kahan’s bill are two of three measures flagged as key by European Union officials who advised California lawmakers behind the scenes to adopt AI regulation in line with the EU’s AI Act, which took five years to create and went into effect this spring. Gerard de Graaf, director of the San Francisco EU office, went to the California Legislature to meet with the authors of AB 3211, AB 2930, and SB 1047 in pursuit of aligning regulation between Sacramento and Brussels.
In an interview with CalMatters this spring, de Graaf said those three laws would accomplish the majority of what the AI Act seeks to do. This week, de Graaf had high praise for his California counterparts, saying he thinks state lawmakers did some serious work to pass so many different AI regulation bills, that they’re at the top of their game, and that they succeeded in being a world leader in AI regulation this year.
“This requires a thorough understanding and that’s not present in many legislatures around the world and in that sense California is a leader,” he said. “The fact that California achieved as much as it did in a year is not an insignificant feat and this will presumably continue.”
Despite the failure of two of the bills he advised lawmakers on, and the possibility that Senate Bill 1047 faces a veto, de Graaf said he sees convergence with EU AI policy in the passage of a bill that requires AI developers to disclose information about the datasets used to train their models.
The fact that the bill meant to protect citizens from discriminatory AI didn’t pass is a really disappointing reflection of the power of tech capital in California politics, said UC Irvine School of Law professor Veena Dubal, whose research has dealt with technology and marginalized workers.
“It really feels like our legislature has been captured by tech companies who by their very structure don’t have the interest of the public at the forefront of their own advocacy or decision making, because they’re profit making machines,” she said.
She thinks events of the past legislative session show that California will not be a leader in regulating generative AI because the power of tech companies is too unwieldy, but she does see signs of promise in the bills passed to protect kids from AI. She’s encouraged that the digital replica bills supported by SAG-AFTRA passed, a reflection of worker strikes in 2022, and that lawmakers made clear that using generative AI to make child pornography, or to curate content for kids without parental consent, should be illegal. What seems more challenging is passing laws that require any degree of accountability. It shouldn’t be debatable whether people deserve protections from civil rights violations, and she wants lawmakers to label other uses of AI unacceptable, like using AI to evaluate people in the workplace.
“The fact that those laws (protecting kids) passed isn’t surprising, and my hope is that their passage paves a way for stopping or banning use of AI or automated decisionmaking in other areas of our lives in which it is clearly already wreaking harm,” she said.