Google Gemini's diversity debacle is the weirdest AI incident yet

A screenshot of Google Gemini.

Generative AI models have been plagued by controversy since text-to-image tools began to hit the mainstream in 2021. From copyright issues to the threat to jobs, many feel that AI poses an existential risk to certain creative professions. And then there's the question of bias, with several models accused of perpetuating social inequality in their results.

To combat this, AI companies have made efforts to ensure their models promote diversity. But Google's Gemini model has drawn ridicule this week for taking the idea too far – the natural conclusion being, believe it or not, racially diverse Nazis.

A screenshot of Google Gemini.

Twitter (sorry, X) is currently awash with screenshots of Gemini users asking the tool to generate images of specifically white historical figures. But the well-intentioned model repeatedly produces a diverse range of people, depicting the likes of the US founding fathers and the aforementioned Nazis as, in some cases, people of colour.

Google has since apologised and paused Gemini's ability to generate images of people. “We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” the company shared on X. “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Indeed, this is a pretty unexpected episode in the generative AI saga of recent years. The issue of bias in AI is very real, but such a dramatic overcorrection – from one of the world's biggest tech companies, no less – is almost comical. But of course, for many artists, the question of generative AI continues to be anything but funny.