The Hill

Elon Musk takes fire for posting fake video of Kamala Harris

Nick Robertson

Elon Musk is being accused of violating the policies on his own social platform, X, after he shared a fake video of Vice President Harris that uses an artificial intelligence (AI) voice mimicking Harris to spew insults about her campaign and President Biden.

The video Musk shared Friday mocks a Harris campaign ad and features a voiceover calling Biden “senile” and Harris the “ultimate diversity hire.”

The video does not contain any disclaimer that it uses AI to mimic Harris’s voice, though the original post from the account @MrReaganUSA labels it a parody. Musk made no such distinction in his own post, a move that appears to violate X site policy barring “misleading media.”


The Harris campaign knocked Musk over the video.

“We believe the American people want the real freedom, opportunity, and security Vice President Harris is offering; not the fake, manipulated lies of Elon Musk and Donald Trump,” campaign spokesperson Mia Ehrenberg said in an email.

California Gov. Gavin Newsom (D) also called out the post, saying such videos should be against the law.

“Manipulating a voice in an ‘ad’ like this one should be illegal,” he wrote on X. “I’ll be signing a bill in a matter of weeks to make sure it is.”

Musk hit back at Newsom’s promise to ban the videos in a crude response early Monday morning.

“I checked with renowned world authority, Professor Suggon Deeznutz, and he said parody is legal in America,” Musk wrote.


The video hits on a number of attack lines against Harris.

“I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” the mock Harris voice says in the video. “I was selected because I am the ultimate diversity hire. I’m both a woman and a person of color, so if you criticize anything I say, you’re both sexist and racist.”

Federal regulators have increasingly looked to crack down on the use of deepfake technology to impersonate politicians after a New Hampshire man used Biden’s voice in a robocall attempting to stifle turnout in the state’s primary election earlier this year.

Public Citizen co-President Robert Weissman told The Associated Press that the post is likely to mislead the public.


“I don’t think that’s obviously a joke,” Weissman said. “I’m certain that most people looking at it don’t assume it’s a joke. The quality isn’t great, but it’s good enough. And precisely because it feeds into preexisting themes that have circulated around her, most people will believe it to be real.”

Public Citizen has advocated for federal regulation of generative AI. Weissman said the video is “the kind of thing that we’ve been warning about.”

The video underscores rising concerns about deepfakes and AI in political advertising; ads using the technology have already appeared in the 2024 U.S. election cycle and in elections abroad.

The Federal Communications Commission (FCC) advanced a proposal last week that would require advertisers to disclose the use of AI in TV and radio ads. AI-generated voice mimicry is already banned in robocalls.


“Bad actors are already using AI technology in robocalls to mislead consumers and misinform the public. That’s why we want to put in place rules that empower consumers to avoid this junk and make informed decisions,” FCC Chair Jessica Rosenworcel said earlier this month.

The FCC proposal would not apply to online ads or video on streaming services, including the video shared by Musk.

Updated: 10:18 a.m.

Copyright 2024 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
