AI doomers have warned of the tech-pocalypse — while doing their best to accelerate it

Photo illustration by Salon/Getty Images

One of the most prominent narratives about AGI, or artificial general intelligence, in the popular media these days is the “AI doomer” narrative. This claims that we’re in the midst of an arms race to build AGI, propelled by a relatively small number of extremely powerful AI companies like DeepMind, OpenAI, Anthropic, and Elon Musk’s xAI (which aims to design an AGI that uncovers truths about the universe by eschewing political correctness). All are backed by billions of dollars: reports suggest that Microsoft will invest over $100 billion in AI, while OpenAI has thus far received $13 billion from Microsoft, Anthropic has $4 billion in investments from Amazon, and Musk just raised $6 billion for xAI.

Many doomers argue that the AGI race is catapulting humanity toward the precipice of annihilation: if we create an AGI in the near future, without knowing how to properly “align” the AGI’s value system, then the default outcome will be total human extinction. That is, literally everyone on Earth will die. And since it appears that we’re on the verge of creating AGI — or so they say — this means that you and I and everyone we care about could be murdered by a “misaligned” AGI within the next few years.

These doomers thus contend, with apocalyptic urgency, that we must “pause” or completely “ban” all research aiming to create AGI. Pausing or banning this research, they argue, would give researchers more time to solve the problem of “aligning” AGI to our human “values,” which is necessary to ensure that the AGI is sufficiently “safe.” Failing to do this means that the AGI will be “unsafe,” and the most likely consequence of an “unsafe” AGI will be the untimely death of everyone on our planet.

The doomers contrast with the “AI accelerationists,” who hold a much more optimistic view. They claim that the default outcome of AGI will be a bustling utopia: we’ll be able to cure diseases, solve the climate crisis, figure out how to become immortal, and even colonize the universe. Consequently, these accelerationists — some of whom use the acronym “e/acc” (pronounced “ee-ack”) to describe their movement — argue that we should accelerate rather than pause or ban AGI research. On their view, there isn’t enough money being funneled into the leading AI companies, and calls for government regulation are deeply misguided because they will only delay the arrival of utopia.

Some even contend that “any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.” So, if you advocate for slowing down research on advanced AI, you are no better than a murderer.

But there’s a great irony to this whole bizarre predicament: historically speaking, no group has done more to accelerate the race to build AGI than the AI doomers. The very people screaming that the AGI race is a runaway train barreling toward the cliff of extinction have played an integral role in starting these AI companies. Some have helped found these companies, while others provided crucial early funding that enabled such companies to get going. They wrote papers, books and blog posts that popularized the idea of AGI and organized conferences that inspired interest in the topic. Many of those worried that AGI will kill everyone on Earth have gone on to work for the leading AI companies, and indeed the two techno-cultural movements that initially developed and promoted the doomer narrative — namely, “Rationalism” and “Effective Altruism” — have been at the very heart of the AGI race since its inception.

In a phrase, the loudest voices within the AI doomer camp have been disproportionately responsible for launching and sustaining the very technological race that they now claim could doom humanity in the coming years. Despite their apocalyptic warnings of near-term annihilation, the doomers have in practice been more effective at accelerating AGI than the accelerationists themselves.

Consider a few examples, beginning with the Skype cofounder and almost-billionaire Jaan Tallinn, who also happens to be one of the biggest financial backers of the Rationalist and Effective Altruist (EA) movements. Tallinn has repeatedly claimed that AGI poses an enormous threat to the survival of humanity. Or, in his words, it is “by far the biggest risk” facing us this century — bigger than nuclear war, global pandemics or climate change.

In 2014, Tallinn co-founded a Boston-based organization called the Future of Life Institute (FLI), which has helped raise public awareness of the supposedly grave dangers of AGI. In 2023, FLI released an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” where GPT-4 was the most advanced system that OpenAI had released at the time. The letter warns that AI labs have become “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” resulting in a “dangerous race.” Tallinn was one of the first signatories.

Tallinn is thus deeply concerned about the race to build AGI. He’s worried that this race might lead to our extinction in the near future. Yet, through his wallet, he has played a crucial role in sparking and fueling the AGI race. He was an early investor in DeepMind, which Demis Hassabis, Shane Legg and Mustafa Suleyman cofounded in 2010 with the explicit goal of creating AGI. After OpenAI started in 2015, he maintained close connections to some people at the company, meeting regularly with individuals like Dario Amodei, a member of the EA movement and “a key figure in the direction of OpenAI.” (Tallinn himself is closely aligned with the EA movement.)


In 2021, Amodei and six other former employees of OpenAI founded Anthropic, a competitor of both DeepMind and OpenAI. Where did Anthropic get its money? In part from Tallinn, who donated $25 million and led a $124 million Series A fundraising round to help the company get started.

Here we have one of the leading voices in the doomer camp claiming that the AGI race could result in everyone on Earth dying, while simultaneously funding the biggest culprits in this reckless race toward AGI. I’m reminded of something that Noam Chomsky said in 2002, during the early years of George Bush’s misguided “War on Terror.” Chomsky declared: “We certainly want to reduce the level of terror,” referring to the U.S. “There is one easy way to do that … stop participating in it.” The same idea applies to the AGI race: if AI doomers are really so worried that the race to build AGI will lead to an existential catastrophe, then why are they participating in it? Why have they funded and, in some cases, founded the very companies responsible for supposedly pushing humanity toward the precipice of total destruction?

In fact, Amodei, Shane Legg, Sam Altman and Elon Musk — all of whom founded or cofounded some of the leading AI companies — have expressed doomer concerns that AGI could annihilate our species in the near term. In an interview with the EA organization 80,000 Hours, Amodei referenced the possibility that “an AGI could destroy humanity,” saying “I can’t see any reason in principle why that couldn’t happen.” He adds that “this is a possible outcome and at the very least as a tail risk we should take it seriously.”

Similarly, DeepMind cofounder Shane Legg wrote on the website LessWrong in 2011 that AGI is his “number 1 risk for this century.” That was one year after DeepMind was created. In 2015, the year he co-founded OpenAI with Elon Musk and others, Altman declared that “I think AI will … most likely sort of lead to the end of the world,” adding on his personal blog that the “development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”

Then there’s Musk, who has consistently identified AGI as the “biggest existential threat,” and “far more dangerous than nukes.” In early 2023, Musk signed the open letter from FLI calling for a six-month “pause” on advanced AI research. Just four months later, he announced that he was starting yet another AI company: xAI.

Over and over again, the very same people saying that AGI could kill us all have done more than anyone else to launch and accelerate the race toward AGI. This is even true of the most famous doomer in the world today, a self-described “genius” named Eliezer Yudkowsky. In a Time magazine article from last year, Yudkowsky argued that our only hope of survival is to immediately “shut down” all of “the large computer farms where the most powerful AIs are refined.” In his view, countries should sign an international treaty to halt AGI research and be willing to engage in military airstrikes against rogue datacenters to enforce this treaty.

Yudkowsky is so worried about the AGI apocalypse that he claims we should be willing to risk an all-out thermonuclear war that kills nearly everyone on Earth to prevent AGI from being built in the near future. He then gave a TED talk in which he reiterated his warnings: if we build AGI without knowing how to make it “safe” — and we have no idea how to make it “safe” right now, he claims — then literally everyone on Earth will die.

Yet I doubt that any single individual has promoted the idea of AGI more than Yudkowsky himself. In a very significant way, he put AGI on the map, inspired many people involved in the current AGI race to become interested in the topic, and organized conferences that brought together early AGI researchers to cross-pollinate ideas.

Consider the Singularity Summit, which Yudkowsky co-founded with the inventor and futurist Ray Kurzweil and tech billionaire Peter Thiel in 2006. This summit, held annually until 2012, focused on the promises and perils of AGI, and included the likes of Tallinn, Hassabis, and Legg on its list of speakers. In fact, both Hassabis and Legg gave talks about AGI-related issues in 2010, shortly before co-founding DeepMind. At the time, DeepMind needed money to get started, so after the Singularity Summit, Hassabis followed Thiel back to his mansion, where he asked Thiel for financial support to start DeepMind. Thiel obliged, offering Hassabis $1.85 million, and that’s how DeepMind was born. (The following year, in 2011, is when Tallinn made his early investment in the company.)

If not for Yudkowsky’s Singularity Summit, DeepMind might not have gotten off the ground — or at least not when it did. Similar points could be made about various websites and mailing lists that Yudkowsky created to promote the idea of AGI. For example, AGI has been a major focus of the community blogging website LessWrong, created by Yudkowsky around 2009. This website quickly became the online epicenter for discussions about how to build AGI, the utopian future that a “safe” or “aligned” AGI could bring about, and the supposed “existential risks” associated with AGIs that are “unsafe” or “misaligned.” As noted above, it was on the LessWrong website that Legg identified AGI to be the number one threat facing humanity, and records show that Legg was active on the website very early on, sometimes commenting directly under articles by Yudkowsky about AGI and related issues.

Or consider the SL4 mailing list that Yudkowsky created in 2001, which described itself as dedicated to “advanced topics in transhumanism and the Singularity, including … strategies to accelerate the Singularity.” The Singularity is a hypothetical future event in which advanced AI begins to redesign itself, leading to a “superintelligent” AGI system over the course of weeks, days, or perhaps even minutes. Once again, Legg contributed to the list, which indicates that the connections between Yudkowsky, the world’s leading doomer, and Legg, cofounder of one of the biggest AI companies involved in the AGI race, go back more than two decades.

These are just a few of the reasons why Altman himself wrote on Twitter (now X) last year that Yudkowsky — the world’s leading AI doomer — has probably contributed more than anyone to the AGI race. In Altman’s words, Yudkowsky “got many of us interested in AGI, helped DeepMind get funding at a time when AGI was extremely outside the Overton window, was critical in the decision to start OpenAI, etc.” He then joked that Yudkowsky may “deserve the Nobel Peace Prize for this.” (These quotes have been lightly edited to improve readability.)

Though Altman was partly trolling Yudkowsky for complaining about a situation — the AGI race — that Yudkowsky was instrumental in creating, Altman isn’t wrong. As a New York Times article from 2023 notes, “Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind.” One could say something similar about Anthropic, as it was Yudkowsky’s blog posts that convinced Tallinn that AGI could be existentially risky, and Tallinn later played a crucial role in helping Anthropic get started — which further accelerated the race to build AGI. The connections and overlaps between the doomer movement and the race to build AGI are extensive and deep — the more one scratches the surface, the clearer these links appear.

Indeed, I mentioned the Rationalist and EA movements earlier. Rationalism was founded by Yudkowsky via the LessWrong website, while EA emerged around the same time, in 2009, and could be seen as the sibling of Rationalism. These communities overlap considerably, and both have heavily promoted the idea that AGI poses a profound threat to our continued existence this century.

Yet Rationalists and EAs are also some of the main participants and contributors to the very race they believe could precipitate our doom. As noted above, Dario Amodei (co-founder of Anthropic) is an EA, and Tallinn has given talks at major EA conferences and donated tens of millions of dollars to both movements. Similarly, an Intelligencer article about Altman reports that Altman once embraced EA, and a New York Times profile describes him as the product of “a strange, sprawling online community that began to worry, around the same time Mr. Altman came to the Valley, that artificial intelligence would one day destroy the world. Called rationalists or effective altruists, members of this movement were instrumental in the creation of OpenAI.”

Yet another New York Times article notes that the EA movement “beat the drum so loudly” about the dangers of AGI that many young people became inspired to work on the topic. Consequently, “all of the major AI labs and safety research organizations contain some trace of effective altruism’s influence, and many count believers among their staff members.” The article then observes that “no major AI lab embodies the EA ethos as fully as Anthropic,” given that “many of the company’s early hires were effective altruists, and much of its start-up funding came from wealthy EA-affiliated tech executives” — not just Tallinn, but also Facebook co-founder Dustin Moskovitz, who, like Tallinn, has donated considerably to EA projects.

There is a great deal to say about this topic, but the key point for our purposes is that the doomer narrative largely emerged out of the Rationalist and EA movements — the very movements that have been pivotal in founding, funding and inspiring all the major AI companies now driving the race to build AGI.

Again, one wants to echo Chomsky in saying: if these communities are so worried about the AGI apocalypse, why have they done so much to create the very conditions that enabled the AGI race to get going? The doomers have probably done more to accelerate AGI research than the AI accelerationists that they characterize as recklessly dangerous.

How has this happened? And why? One reason is that many doomers believe that AGI will be built by someone, somewhere, eventually. So it might as well be them who builds the first AGI. After all, many Rationalists and EAs pride themselves on having exceptionally high IQs while claiming to be more “rational” than ordinary people, or “normies.” Hence, they are the best group to build AGI while ensuring that it is maximally “safe” and “beneficial.” The unfortunate consequence is that these Rationalists and EAs have inadvertently initiated a race to build AGI that, at this point, has gained so much momentum that it appears impossible to stop.

Even worse, some of the doomers most responsible for the AGI race are now using this situation to gain even more power by arguing that policymakers should look to them for the solutions. Tallinn, for example, recently joined the United Nations’ Artificial Intelligence Advisory Body, which focuses on the risks and opportunities of advanced AI, while Yudkowsky has defended an international policy that leaves the door open to military strikes that might trigger a thermonuclear war. These people helped create a huge, complicated mess, then turned around, pointed at that mess, and shouted: “Oh, my! We’re in such a dire situation! If only governments and politicians would listen to us, though, we just might be able to dodge the bullet of annihilation.”

This looks like a farce. It’s like someone drilling a hole in a boat and then declaring: “The only way to avoid drowning is to make me captain.”

The lesson is that governments and politicians should not be listening to the very people — or the Rationalist and EA movements to which they belong — that are disproportionately responsible for this mess in the first place. One could even argue — plausibly, in my view — that if not for the doomers, there probably wouldn’t be an AGI race right now at all.

Though the race to build AGI does pose many dangers, the greatest underlying danger is the Rationalist and EA movements that spawned this unfortunate situation over the past decade and a half. If we really want to bring the madness of the AGI race to a stop, it’s time to let someone else have the mic.