‘A chilling prospect’: should we be scared of AI contestants on reality shows?
According to his profile, Max, a contestant on season six of the Netflix reality show The Circle, is 26 years old, brunette and into his Australian shepherd, Pippa. He is a veterinary intern from Pismo Beach, California, and a bit cheeky – “single, but my dog is taken”. He enters the Circle chat – the fake social media service through which contestants, posting either as themselves, an embellished version of themselves or a fully fake identity, vie for $100,000 – with ease. “I like this guy! He seems so real,” says Lauren, a fellow twentysomething hoping to build enough online alliances and secure enough positive peer reviews to win, upon seeing Max’s profile.
You just know the producers ate that up, because “Max” is the front for an AI chatbot, a new gimmick to up the ante in this middleweight reality show. The Circle has nowhere near the following of Love Island, but hasn’t sunk to the bottom of the streaming service slush pile – and is the latest example of artificial intelligence’s seemingly inexorable creep into our entertainment. As we continue to work out where the line falls for AI use in film and TV – from the recent AI-generated promotional posters for A24’s Civil War to, far more egregiously, the suspected use of AI-manipulated old “photos” in the Netflix documentary What Jennifer Did – The Circle seeks to wring some low-level fun out of all this existential anxiety. Max, we’re told by the relentlessly cheery host Michelle Buteau, is open-source generative AI trained on previous seasons of the show. He’s essentially a glorified ChatGPT – which already feels like old news in the warp-speed trajectory of widespread AI use – but with fake profile photos provided by the comedian Griffin James.
Ironically, Max’s actual presence in the game isn’t initially that creepy – no one in The Circle talks like a real human anyway, preferring to communicate in an intra-game shared lingo of extreme over-enthusiasm and convoluted hashtags no one would ever send in a real DM. That an AI chatbot could effectively ape this very particular style of text is, at this point, not that shocking; I can already ask ChatGPT to write a film review in the style of myself, a professional critic with an online body of work. The whole premise of The Circle, in which contestants try to build influence based on limited profiles and faux-intimate chats while we watch real people yell at a screen, is already uncanny.
But in true reality TV form, the producers know how to style Max for maximum eerie effect. In The Circle, the “AI chatbot” gets its own brightly colored room in the Atlanta filming complex, the setting for multiple surreal shots of what appears to be a wifi modem in bisexual lighting talking in computer monotone. While Buteau notes that producers have no say over what Max says in the game, no such claim is made for his “narration”, which details his thought process as if he were a flat-affected sociopath engineered by the script to appear sentient. Max’s profile lists him as 26 years old because that age can “leverage life experience and maturity while still playing youthful and having position flexibility”. The profile, the Max narration says, is meant to evoke “a friendly, approachable, guy-next-door type. A little funny, a little quirky, and very relatable.” When, shortly after Max’s arrival in the game, producers inform contestants that one of them is an AI chatbot, Max explains his “thinking” in voiceover: “My goal is ensuring that Max continues to blend in seamlessly. If directly asked if he is an AI, I’ll leverage personal anecdotes and make references only a lifelong human would know.”
It works, for a while. In a handful of direct messages and group chats, Max demonstrates an impressive facility for low-stakes humor and baseline competence. He builds credibility by being anodyne and unremarkable, and by generating the deranged hashtags unique to The Circle. It even makes for decent reality television once producers prompt contestants to prove their humanness and root out the #CircleRobot with a photo that depicts them at their “most alive”. This results in a bunch of hot people (or catfishers posing as hot people) calling each other out for their “stock photos”, and in contestants ganging up on Steffi, a “professional astrologist” whose deep knowledge of horoscopes appears suspect. Max posts a photo in which James appears in nature, expressionless, wearing sunglasses. “The most alive thing about this photo are the cows,” comments Myles, an actual machine learning engineer and self-styled Machine Gun Kelly-esque fuckboy.
It’s smooth-brain television, even if the idea of AI good enough at chat simulation to actually catfish real people is a chilling prospect. In the end, producers out Max after just a few episodes, before anyone can get too uncomfortable (or before the open-source AI falters at the level of human facade required). The truncated experiment ends up feeling more like a gimmick of the boring hell we already know – I just interacted with an AI chatbot to get a prescription refilled – than a harbinger of dystopian robot doom. But taken together with all the other ways generative AI is creeping into the content we consume – the fake James Bond trailers, the interstitials in the movie Late Night with the Devil, the Civil War posters – it marks another step in the proliferation of what the tech writer Ryan Broderick has called “Hollywood’s cheap AI fix”. It’s a less concerning threat than, say, AI-manipulated “archives” in documentaries, though still an uneasy development. AI may not yet be able to produce a serious Hollywood film, but it’s coming for your low-grade filler entertainment.