Tech giants pledge crackdown on 2024 election AI deepfakes. Will they keep their promise?
Under pressure from the White House and governments around the globe, major technology companies pledged Friday to crack down on artificial intelligence-generated deepfakes that could undermine the integrity of major democratic elections in the U.S. and overseas this year.
Google, Meta, TikTok and other companies said they would join forces to create tools to detect and debunk election deepfakes. They unveiled the accord as political and security leaders gathered at the Munich Security Conference in Germany.
The deepfakes in question are videos, images, and audio that alter or fake the appearance, voice, or actions of political candidates, election officials or other key figures in a democratic election. These alterations can also be used to mislead voters about when, where and how to vote.
The coalition, which includes Adobe, Amazon, Microsoft, OpenAI and X, formerly Twitter, promised to be open with the public about how its members combat AI-generated falsehoods that attempt to disrupt elections on their platforms.
But battling bad actors will require "a whole-of-society response," the companies said. Virtually anyone can now create or digitally alter images and clips in realistic ways to target and deceive voters.
"We are committed to doing our part as technology companies while acknowledging that the deceptive use of AI is not only a technical challenge, but a political, social, and ethical issue and hope others will similarly commit to action across society," they said.
The accord is similar to a voluntary pledge many of the same companies signed in July after a meeting at the White House.
“This is an important step toward good platform hygiene and a better internet, but there is substantially more to do,” said Josh Lawson, director of AI and democracy at the nonprofit think tank Aspen Institute. “This has to be a floor, but not a ceiling, for future work.”
Pressure mounts on Big Tech to stop election deepfakes
This year will set a record for the number of people living in countries holding nationwide elections, including India and Mexico, raising concerns that AI will play a role at the ballot box.
With rapid advances in technology and little oversight from the government, election experts are bracing for the malicious use of deepfakes in this year’s presidential contest.
During New Hampshire’s primary election last month, AI-generated robocalls mimicking President Joe Biden’s voice tried to discourage people from voting.
Biden signed an executive order on AI in October. But AI technologies are advancing so quickly that lawmakers and regulators are having trouble keeping up. While the U.S. has few rules governing AI-generated content, the European Union has taken the lead, requiring companies to identify and label deepfakes.
“It is good that many of the large tech platforms have agreed to identify and label deepfakes but their action falls short of what is needed,” said Darrell West, a senior fellow at the Center for Technology Innovation at the think tank Brookings Institution. “We already are seeing fake videos and audiotapes and that usage is likely to grow as we get closer to the election.”
Tech industry accord is a ‘feeble response,’ critics say
Critics say tech companies cannot be trusted to regulate themselves and more needs to be done to hold the companies and their platforms accountable for content that aims to trick voters.
“This is a feeble response on the part of the tech companies given the risk we are already seeing in the form of political disinformation and election interference,” Hany Farid, a University of California, Berkeley professor who specializes in deepfakes and disinformation, said in an email.
Moreover, the agreement does not apply to other online platforms that are notorious for spreading election-related lies, Farid said.
“There are many bad players in this space who aren’t being invited to the White House for a photo op, and these players are not going to comply with voluntary standards,” he said.
Why deepfakes can be dangerous to democracy
The quality of deepfakes has rapidly improved, making them harder to distinguish from authentic videos, images and audio. The worry is that voters might not be able to tell the difference between what is real and what is AI-generated. And that could lead people to question authentic content as well.
Lawson, who previously worked on election integrity and platform policies at Facebook owner Meta, expects AI-generated election misinformation and persuasion content will pop up in places where people don’t expect it, such as individual SMS messages or WhatsApp channels. These messages can deceive voters about polling places or target specific language communities to suppress voter turnout.
What is the government doing to combat election deepfakes?
There is no federal law banning deepfakes.
“While creating a deepfake to swing the outcome of an election may seem deeply unethical, it is currently legal in most places in the United States, and we should assume that as long as it is legal, it is a huge risk,” said Ilana Beller, organizing manager for the Public Citizen Democracy Campaign. “There is bipartisan support for this legislation on both a federal and a state level, but we cannot assume that Congress will act on this in time for the 2024 election.”
Some executive agencies have stepped in to fill policy gaps.
The Federal Election Commission is considering rule changes that would ban federal candidates from using generative AI tools to misrepresent their political rivals.
The Federal Communications Commission this month banned the use of AI-generated voices in robocalls. The ban came two days after the FCC issued a cease-and-desist order against the company responsible for the audio deepfake of Biden’s voice in New Hampshire.
The Federal Trade Commission this week moved to adopt new rules around impersonation, citing the threat of AI-generated scams.
What are states doing about election deepfakes?
A handful of states – Michigan, Minnesota, California, Washington, and Texas – already have laws in place to restrict AI in political communications.
During this legislative session, lawmakers in 32 states have introduced 52 bills to regulate deepfakes in elections, according to Public Citizen.
Some of the state bills ban AI-generated content in political ads or require disclaimers on AI-generated content. Others ban deepfakes before an election.
“We have seen legislators in almost every single state that has a legislative session this year working proactively to address this issue,” Beller said. “This is an issue that cuts across partisan and geographical differences. We are seeing bills sponsored by Republicans and Democrats alike, and a good deal of bipartisan cooperation to thwart this threat to our democracy.”
But there is a limit to how effective state regulation can be.
“The reality is that enforcement of state-level legislation is often less comprehensive than federal enforcement which can be tagged to things like trade sanctions and much more robust enforcement than state-level criminal penalties,” Lawson said.
This article originally appeared on USA TODAY: AI deepfakes threaten 2024 election. Tech giants pledge to take action