Eminem's MTV Awards show featured the most convincing live deepfake we've seen
A highlight of last night's MTV Video Music Awards (VMAs) was Eminem performing alongside his late-90s alter ego, Slim Shady. After initially taking the stage flanked by an entourage of less convincing impersonators, the real Marshall Mathers (now bearded) was joined by what appeared to be a younger version of himself, straight out of 1999.
Of course, this was not the real Slim Shady but an AI-powered digital recreation, and one of the best deepfakes we've seen executed in front of a live audience. The performance was a perfect demonstration of the work done by a team that scooped the night's VMA for best VFX.
Metaphysic uses an AI-powered workflow and facial recreation to create digital characters. The studio worked with Eminem to bring Slim Shady back to life for the Houdini music video, which won it the best VFX VMA, shared with Synapse VP and director Rich Lee.
Seeing the technology in a live setting was even more impressive. For the performance at the UBS Arena in New York, a stand-in acted out Slim Shady's dance moves while Metaphysic applied its face swap process in real time: in-camera for broadcast viewers and on a large screen for the live audience. The result was a believable recreation of Slim Shady that stands up to close scrutiny.
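To give a rough sense of how a real-time face swap loop is structured, here is a minimal Python sketch using OpenCV. It is deliberately crude, pasting a reference face onto each frame with Poisson blending rather than using the kind of neural face-synthesis models a studio like Metaphysic relies on, and the reference image path is a placeholder, but the capture, detect, swap and output steps mirror the general shape of a live pipeline.

```python
# Toy real-time face swap: capture a live frame, detect the largest face,
# and blend a reference face over it. Production pipelines replace the crude
# paste with a learned face-synthesis model, but the frame loop is similar.
import cv2
import numpy as np

reference = cv2.imread("reference_face.jpg")  # placeholder path, not a real asset

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def largest_face(image):
    """Return (x, y, w, h) of the biggest detected face, or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return max(faces, key=lambda f: f[2] * f[3]) if len(faces) else None

# Crop the face out of the reference image once, up front.
rx, ry, rw, rh = largest_face(reference)
ref_face = reference[ry:ry + rh, rx:rx + rw]

cap = cv2.VideoCapture(0)  # webcam stands in for the stage camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    box = largest_face(frame)
    if box is not None:
        x, y, w, h = box
        patch = cv2.resize(ref_face, (w, h))
        mask = np.full(patch.shape, 255, dtype=patch.dtype)
        centre = (x + w // 2, y + h // 2)
        # Poisson blending keeps skin tone and lighting roughly consistent.
        frame = cv2.seamlessClone(patch, frame, mask, centre, cv2.NORMAL_CLONE)
    cv2.imshow("face swap", frame)  # in a broadcast setup this frame feeds the mix
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```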
The team stressed that the success of its work depends not just on the technology but also on the collaboration between the AI double and the human performers: convincing performances allowed the AI models to capture and enhance the nuances of Slim Shady's look and behaviour.
In the Houdini video, which culminates in a rooftop standoff where the two characters merge into a unified Eminem, Marshall Mathers played both himself and Slim Shady, and Metaphysic then used AI to synthesise the latter's look. To achieve the required fidelity, the team trained AI models on data from Slim Shady's late-90s prime, recreating the young Slim Shady complete with his signature bleached hair. The work took just three weeks to complete, showcasing the huge potential of AI for visual effects.
For more AI news, see this week's Adobe Firefly AI video reveal, which finally gave us a glimpse of what Adobe has planned for generative text-to-video in Premiere Pro and After Effects.