Real products, fake endorsements: Why experts say AI-generated ads will get tougher to spot
MrBeast became the biggest YouTuber in the world partly because of his elaborate giveaways.
He once handed out thousands of free Thanksgiving turkeys and left a waitress a $10,000 tip for two glasses of water. So when a video appeared to show him offering newly released iPhones to thousands of people for the low price of $2, it seemed like one of his typical stunts.
One problem: It wasn’t really him. That video, he said, was the work of someone who used artificial intelligence to replicate his likeness without his permission.
“Are social media platforms ready to handle the rise of AI deepfakes?” wrote MrBeast, whose real name is Jimmy Donaldson, in a post on X, formerly Twitter. “This is a serious problem.”
Welcome to the world of deepfake advertising, where the products might be real, but their endorsements are anything but. It's where videos that appear to show celebrities plugging everything from dental plans to cookware are in fact AI-generated fabrications that use technology to alter voices, appearances and actions.
Of course, fake celebrity endorsements have been around for about as long as celebrities themselves. What has changed is the quality of the tools used to create them. Instead of merely claiming that a celebrity endorses a product, scammers can now fabricate a video that appears to prove it, bilking unsuspecting consumers.
With a few clicks and a little know-how, a savvy scammer can generate audio, video and still images that are increasingly difficult to identify as fabrications, even if the practice is still in its relative infancy in the realm of advertising.
“It’s not huge as of yet, but I think there’s still a lot of potential for it to become a lot bigger because of the technology, which is getting better and better,” said Colin Campbell, an associate professor of marketing at the University of San Diego who has published research about AI-generated ads.
Tom Hanks, Gayle King among celebrities targeted in AI scams
There is no shortage of nefarious uses for AI technology.
An artificially generated robocall used President Joe Biden's voice to urge voters in New Hampshire to sit out the primary election in that state. And fabricated sexually explicit images of pop star Taylor Swift circulated online last month, leading to increased calls for regulation.
On Friday, a host of major technology companies signed a pact pledging to work to prevent AI tools from being used to disrupt elections.
But the technology is also being used to reach more directly into people's pocketbooks with fabricated product endorsements.
“It places the burden on people who are bombarded with information to then be the arbiters of … protecting their financial selves, on top of everything else,” said Britt Paris, an assistant professor at Rutgers University who studies AI-generated content. “The people that make these technologies available, the people that are really profiting off of deepfake technologies … they don’t really care about everyday people. They care about getting scale and getting profit as soon as they can.”
Actor Tom Hanks and broadcaster Gayle King are among those who have said their voices and images were altered without their consent and attached to unauthorized giveaways, promotions and endorsements.
“We’re at a new crossroads here, a new nexus of what types of things are possible in terms of using someone’s likeness,” Paris said.
Similar endorsement claims have been debunked by USA TODAY, including those asserting Kelly Clarkson endorsed weight-loss keto gummies and an Indian billionaire promoted a trading program. The video appearing to show Clarkson was viewed more than 48,000 times.
Yet they keep popping up, in part because they’re so easy to create.
A USA TODAY search of Meta’s ad library revealed multiple videos that appeared to be AI-generated fabrications. They claim to show Elon Musk giving away gold bars and Jennifer Aniston and Jennifer Lopez offering liquid botox kits.
“Any time that someone can not pay an actor or a celebrity to appear in their advertisements, they’ll probably do it, right?” Paris said. “These smaller scammer companies ... will definitely use the tools at their disposal to eke out whatever money they can from people.”
‘The software’s pretty easy to use’
Creators of those fake endorsements typically follow a straightforward process, experts say.
They start with a text-to-speech program that generates audio from a written script. Other programs can use a small sample of authentic audio from a given celebrity to recreate the voice, sometimes with as little as a minute of real audio, said Siwei Lyu, a digital media forensics expert at the University at Buffalo.
Other programs then generate lip movements that match the spoken words in the audio track, and that synthetic footage is overlaid onto the person’s mouth in the original video, Lyu said.
“All the software’s pretty easy to use,” Lyu said.
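To make that workflow concrete, here is a deliberately non-functional Python sketch of the three stages Lyu describes. Every function name (synthesize_voice, generate_lip_sync, overlay_mouth_region) and file name is a hypothetical placeholder rather than a real tool, and the stubs perform no synthesis at all; they only show how the pieces would connect.

```python
# Illustrative sketch only. These placeholder functions (hypothetical names, not
# real tools) mirror the three-stage workflow described above. None of them does
# any actual synthesis; each simply marks where an off-the-shelf program would
# slot in.

def synthesize_voice(script_text: str, voice_sample_path: str) -> bytes:
    """Stage 1: a text-to-speech or voice-cloning program turns a written script
    into audio, seeded by a short sample of the target's real voice."""
    print(f"[stub] would synthesize audio for {script_text!r} using {voice_sample_path}")
    return b""  # placeholder for generated audio


def generate_lip_sync(audio: bytes, source_clip_path: str) -> bytes:
    """Stage 2: a lip-sync program generates mouth movements matching the audio track."""
    print(f"[stub] would generate lip movements for {source_clip_path} matched to the audio")
    return b""  # placeholder for synthetic mouth footage


def overlay_mouth_region(source_clip_path: str, mouth_footage: bytes) -> bytes:
    """Stage 3: the synthetic mouth footage is composited over the person's mouth
    in the original clip, producing the finished fake endorsement."""
    print(f"[stub] would overlay synthetic mouth footage onto {source_clip_path}")
    return b""  # placeholder for the final video


if __name__ == "__main__":
    audio = synthesize_voice("Claim your free gift today!", "one_minute_sample.wav")
    mouth = generate_lip_sync(audio, "original_clip.mp4")
    fake_ad = overlay_mouth_region("original_clip.mp4", mouth)
```

The point of the sketch is simply that the glue between stages is trivial; the heavy lifting sits inside widely available tools, which is why Lyu describes the software as easy to use.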
Those videos are also easy to produce in bulk and tailor to specific audiences, leading to another problem: Videos that don't spread widely can be tougher to find – and tougher to police. For example, there were 63 versions of the purported Lopez and Aniston ad in the Meta ad library. Many were active for only a day or two, accumulating a few hundred views before they were deleted and replaced by new ones.
“In most cases, they don’t go everywhere,” Campbell said. “So you can just target certain groups of consumers, and only those people will see them. So it becomes harder to detect these, especially if they’re targeting people who are less educated or just less aware of what might actually be happening.”
For the moment, telltale clues that an AI-generated video is not real can still be spotted with the naked eye. Teeth and tongues are difficult to artificially recreate, Lyu said. Sometimes, a fake video is too perfect and leaves out the pauses, breaths or other imperfections of human speech.
But the technology has come so far in such a short period of time that a fabricated video may be indistinguishable from an authentic one in as soon as “a couple of years,” Campbell said.
“The video tools are not as good as the image-based stuff,” he said. “But video is essentially just a bunch of images put together, right? So it’s just a matter of processing power and getting more experience with it.”
Think critically, use online AI detection tools
Social media users have a few tactics at their disposal to protect themselves. Some were identified by the Better Business Bureau in a warning issued in April 2023.
The main one: Think critically.
“Tom Hanks, it would seem sort of strange that he might be selling dental insurance,” Paris said. “If it doesn’t pass the smell test, based on what you know about that particular celebrity, it’s probably not worth getting too worked up about, and certainly not sharing. At least, not believing it until you go in and do a little legwork, background research.”
Companies typically don’t limit their legitimate ads to a single social media platform. A real video posted to Facebook, for example, likely would show up on Instagram, TikTok and YouTube, too, so an endorsement that surfaces in only one place is worth treating with suspicion.
There are also several online detectors that can determine, with varying degrees of accuracy, whether an image is authentic or AI-generated.
Social media users not yet familiar with those tools and tips still have some time – but maybe not a lot of it – to get themselves up to speed.
“The fake commercials are, I’ll say, a threat,” Lyu said. “But not truly a danger for everyone – yet.”