San Francisco (AFP) – California, home to Silicon Valley, is eager to rein in the deployment of artificial intelligence and is looking to Europe's tough-on-big-tech approach for inspiration.
The richest state in the United States by GDP, California is a hotbed of no-holds-barred tech innovation, but lawmakers in state capital Sacramento want to give the industry laws and guardrails it has largely been spared in the internet age.
Brussels has enacted a barrage of laws on US-dominated tech and sprinted to pass the AI Act after OpenAI's Microsoft-backed ChatGPT arrived on the scene in late 2022, unleashing a global AI race.
"What we're trying to do is actually learn from the Europeans, but also work with the Europeans, and figure out how to put regulations in place on AI," said David Harris, senior policy advisor at the California Initiative for Technology and Democracy.
As they have in the past with EU laws on private data, lawmakers in California are looking to recent European legislation on AI, especially given the slim prospects of equivalent national legislation emerging from Washington.
There are at least 30 different bills proposed by California state legislators that relate to various aspects of AI, according to Harris, who said he has advised officials here and in Europe on such laws.
Proposed laws in California range from requiring AI makers to reveal what was used to train models to banning election ads containing any computer-generated features.
"One of the aspects I think is really important is the question of how we deal with deepfakes or fake text created to look like a human being is sending you messages," Harris told AFP.
State assembly member Gail Pellerin is backing a bill she says would essentially ban the spreading of deceptive digital content created with generative AI in the months leading up to and the weeks following an election.
"Bad actors who are utilizing this are really hoping to create chaos in an election," Pellerin said.
Industry association NetChoice is dead set against importing aspects of European legislation on AI, or any other EU tech regulation.
"They are taking, essentially, a European approach on artificial intelligence - which is that we must ban the technology," said Carl Szabo, the general counsel of the association, which advocates for light touch regulation of tech.
"Outlawing AI won't stop (anything). It's bad because bad guys don't follow the law," Szabo argued.
"That's what makes them bad guys."
US computer software giant Adobe, like most tech giants, worked with Europe on the AI Act, according to Adobe General Counsel and Chief Trust Officer Dana Rao.
At the heart of the EU AI Act is a risk-based approach, with AI practices deemed more risky getting more scrutiny.
"We feel good about where the AI Act ended up" with its high-risk, low-risk approach, said Rao.
Already, Adobe engineers carry out "impact assessments" to rate risk before making AI products available, according to Rao.
"You want to think about nuclear safety, about cybersecurity, about when AI is making substantial decisions over human rights," Rao said.
In California, Rao said he expected the problem of deepfakes to be the first to fall under the authority of a new law.
Assembly Bill 602 targets non-consensual deepfake pornography, while Assembly Bill 730 bans the use of AI deepfakes during election campaign season.
To fight deepfakes, Adobe joined other companies to create "content credentials," which Rao equated to a "nutrition label" for digital content.
Assemblywoman Pellerin expects AI laws adopted in California to be replicated in other states.
"People are watching California," Pellerin said, with a slew of US states also working on their own AI deepfake bills.
"We're all in this together; we have to stay ahead of the folks that are trying to wreak havoc in an election," she said.
© 2024 AFP