The Metaverse Flopped, So Mark Zuckerberg Is Pivoting to Empty AI Hype
Mark Zuckerberg bet tens of billions of dollars on the “metaverse” only to be mocked at every turn for the idea of an immersive virtual-reality social network. When all is said and done, his promise to add legs to users’ digital avatars (previously rendered as floating torsos) might be what most people remember about the ill-conceived project — if they think about it at all.
But while the metaverse failed to launch, the frenzy over artificial intelligence surged: 2023 was rife with speculation about the promise of tools including OpenAI’s text-based ChatGPT and generative image models like Midjourney and Stable Diffusion, not to mention people abusing that same tech to spread misinformation. Meta itself pivoted from cringey demos of Zuckerberg taking VR tourist selfies in front of a low-res Eiffel Tower to cringey announcements of partnerships licensing the voices of Kendall Jenner, MrBeast, Snoop Dogg and Paris Hilton for the company’s new ensemble of AI “assistants.”
On Thursday, Zuckerberg raised the hype for Meta’s AI play higher still with a video update shared on both Instagram and Threads. Looking a bit sleep-deprived, the CEO said he was “bringing Meta’s two AI research efforts closer together to support our long-term goals of building general intelligence, open-sourcing it responsibly, and making it available and useful to everyone in all of our daily lives.” The reorganization combines the company’s Fundamental AI Research (FAIR) division with its GenAI product team to expedite user access to AI features — which, as Zuckerberg pointed out, also requires a massive investment in graphics processing units (GPUs), the chips that provide the computing power for complex AI models. He also said that Meta is currently training Llama 3, the latest version of its generative large language model. (And in an interview with The Verge, he acknowledged aggressively courting researchers and engineers to work on all this.)
But what does this latest push in Meta’s mission to catch up on AI really mean? Experts are skeptical of Zuckerberg’s utopian notion that Meta would contribute to the greater good by open-sourcing its promised “artificial general intelligence” (that is, making the model’s code publicly available for modification and redistribution), and they question whether Meta will achieve such a breakthrough at all. For now, an AGI remains a purely theoretical autonomous system capable of teaching itself and surpassing human intelligence.
“Honestly, the ‘general intelligence’ bit is just as vaporous as ‘the metaverse,’” David Thiel, a big data architect and chief technologist of the Stanford Internet Observatory, tells Rolling Stone. He finds the open-sourcing pledge somewhat disingenuous as well, as “it gives them an argument that they’re being as transparent about the tech as possible.” But, Thiel notes, “any models they release publicly are going to be a small subset of what they actually use internally.”
Sarah Myers West, managing director of the AI Now Institute, a research nonprofit, says that Zuckerberg’s announcement “reads clearly like a PR tactic meant to garner goodwill, while obfuscating what’s likely a privacy-violating sprint to stay competitive in the AI game.” She, too, finds the pitch about Meta’s goals and ethics less than convincing. “Their play here isn’t about benefit, it’s about profit,” she says. “Meta’s really stretched the boundaries of what ‘open source’ means in the context of AI, past the point where those words carry any meaning (you could argue the same is true for the conversation about AGI). So far, the AI models Meta’s released provide little insight or transparency into key aspects of how its systems are built, despite this major marketing play and lobbying effort.”
“I think a lot turns on what Meta, or Mark, decides the ‘responsibly’ in ‘responsibly open-source’ means,” says Nate Sharadin, a professor at Hong Kong University and fellow at the Center for AI Safety. A language model like Llama (which Meta has advertised as open source, though some researchers criticize its license as quite restrictive) can be used in harmful ways, Sharadin says, but its risks are mitigated because the model itself doesn’t have “reasoning, planning, memory” and related cognitive attributes. However, those are the abilities seen as necessary for the next generation of AI models, “and certainly are what you’d expect in ‘fully general’ intelligence,” he says. “I’m not sure on what grounds Meta thinks that a fully general intelligent model can be responsibly open-sourced.”
As for what this hypothetical AGI would look like, Vincent Conitzer, director of the Foundations of Cooperative AI Lab at Carnegie Mellon University and head of technical AI engagement at the University of Oxford’s Institute for Ethics in AI, speculates that Meta could start with something like Llama and expand from there. “I imagine that they will focus their attention on large language models, and will probably be going more in the multimodal direction, meaning making these systems capable with images, audio, video,” he says, like Google’s Gemini, released in December. (Competitor ChatGPT can now also “see, hear, and speak,” as OpenAI puts it.) Conitzer adds that while there are dangers to open-sourcing such technology, the alternative of just developing these models “behind the closed doors of profit-driven companies” also raises problems.
“As a society, we really don’t have a good handle on exactly what we should be most concerned about — though there seem to be many things to be concerned about — and where we want these developments to go, never mind having the regulatory and other tools needed to steer them in that direction,” he says. “We really need action on this, because meanwhile, the technology is racing ahead, and so is its deployment into the world.”
The other issue, of course, is privacy, where Meta has a checkered history. “They have access to massive amounts of highly sensitive information about us, but we just don’t know whether or how they’re putting it to use as they invest in building models like Llama 2 and 3,” says West. “Meta has proven time and time again it can’t be trusted with user data before you get to the endemic problems in LLMs with data leakage. I don’t know why we’d look the other way when they throw ‘open source’ and ‘AGI’ into the mix.” Sharadin says the company’s privacy policy, including the terms governing its AI development, “allows for them to collect a huge range of user data for the purposes of ‘Providing and improving our Meta Products.’” Even if you opt out of allowing Meta to use your Facebook information this way (by submitting a little-known and rarely used form), “there’s no way to verify the removal of the data from the training corpus,” he says.
Conitzer observes that we are facing a future where AI systems like Meta’s have “ever more detailed models of individuals,” saying it may require us to rethink our approach to online privacy altogether. “Maybe in the past I shared some things publicly and I thought each of those things individually wasn’t harmful to share,” he says. “But I didn’t realize that AI could draw connections between the various things that I posted, and the things that others posted, and that it would learn something about me that I really didn’t want out there.”
In sum, then, Zuckerberg’s excitement over Meta’s latest strategy in the increasingly ferocious AI wars — which has totally supplanted his rhapsodizing about the glories of a metaverse — seems to portend even more invasive surveillance. And it’s far from clear what kind of AGI product Meta might get out of it, if it manages to create this mythic “general intelligence” at all. As the metaverse saga proved, major pivots by tech giants don’t always add up to real or meaningful innovation.
Though if the AI bubble bursts too, Zuckerberg is sure to go chasing whatever hot trend comes next.