Calling ChatGPT on its 'bulls***'
How much can we really trust artificial intelligence? It’s a huge question, given the startling growth of ChatGPT and other chatbots powered by the technology. Millions of people around the world are now using these tools to craft emails, write essays for school, make purchases and do basic research. At the same time, there are increasing reports that such chatbots are, to put it charitably, less than accurate.
So I decided to conduct my own little experiment to test AI’s reliability when I flew down to Rio de Janeiro last week to interview Meredith Whittaker, the president of Signal, the encrypted messaging service, at the Web Summit tech conference.
The sprawling conference was filled with tech executives touting AI’s potential to remake the world economy and solve humanity’s problems. But Whittaker was an outlier. A former Google researcher, she left the company four years ago, warning that AI would be misused for “social control and oppression.” She’s since become one of AI’s most outspoken critics.
To prepare for the interview, I went on ChatGPT for the first time and asked a simple question: What should I ask Meredith Whittaker about AI?
The first few responses ChatGPT gave me were, under the circumstances, somewhat ridiculous. “What inspired you to work in the field of AI and how did you get started?” was the first suggested question. “In your opinion, what are the most promising applications of AI that we should be paying attention to right now?” was the second.
Neither seemed to make any sense, considering Whittaker’s warnings about AI’s potential threat to the future of humanity. But ChatGPT did suggest one question that seemed helpful. “Signal recently published a report on the role of AI in content moderation,” ChatGPT informed me. “Can you tell us a bit more about the key findings from that report and what it means for the future of content moderation?”
The case of the missing report on AI content moderation
I tried Googling this report and couldn’t find it, leading me to conclude that ChatGPT knew something that Google’s search engine didn’t. So I put the question to Whittaker in front of an audience of thousands at Web Summit’s main Center Stage auditorium. Tell us about the findings of your report, I asked.
“That’s a lie,” she told me.
“What did your report say?” I asked.
“There was no report,” she told me.
AI frequently gets things like this wrong, Whittaker explained, leading her to conclude that tools like ChatGPT are really a “bullshit engine.”
“It takes a huge amount of surveillance data that is scraped from the darkest holes of the internet — it’s Reddit, it’s Wikipedia, it’s message board comments that are probably [from] 4Chan,” she said. Then, based on all that massive data and computational power, “it predicts what is likely to be the next word in the sentence. So it’s a statistical predictive engine.”
The answers that ChatGPT spits out may indeed be “likely,” Whittaker said. After all, she’s spoken out about AI as well as about misinformation on social media, making the issue of content moderation something she would “likely” be interested in.
Except that she and Signal never did such a report. ChatGPT’s “prediction” — stated as fact — was flat wrong.
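Whittaker’s phrase “statistical predictive engine” is worth unpacking. Here is a deliberately crude sketch in Python (my own toy illustration, nothing like ChatGPT’s actual architecture): a model that only counts which word tends to follow which in a tiny made-up training text, then strings together “likely” words. Notice that it can confidently assemble a sentence that appears nowhere in what it read.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "statistical predictive engine."
# This is not how ChatGPT actually works; real models are vastly more
# sophisticated. But the core move is the same: pick a LIKELY next
# word, with no check on whether the resulting claim is TRUE.

# A tiny invented corpus (hypothetical, for illustration).
corpus = (
    "signal published a report on ai . "
    "signal published a statement on privacy . "
    "researchers published a report on content moderation ."
).split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a "likely" sentence, one predicted word at a time.
word, sentence = "signal", ["signal"]
while word != ".":
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
# Can output: "signal published a report on content moderation ."
# Every word transition is statistically plausible, yet that exact
# claim never occurred in the training text. Stated as fact anyway.
```

A real large language model is incomparably bigger and subtler than this, but the failure mode Whittaker describes is the same: the machinery optimizes for what sounds plausible, not for what is true.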
Whittaker went on to explain why this insight into AI’s limits is, as she put it, “really, really important.”
“Why are we using a bullshit engine for anything serious?” she asked. “We are in an information ecosystem that is overrun by falsehoods, by half-truths, by misinformation. And as a society, we need access to some form of shared reality.”
Instead, Whittaker said, we’ve unleashed a new form of technology “that behaves kind of like that uncle who shows up to holiday gatherings, has a few drinks, and then just talks confidently about shit he does not know. And that’s funny for the holidays. But that’s not something we should be injecting into our information ecosystem.”
Follow-up questions for ChatGPT
It was a sobering lecture for me and, I suspect, many in the audience. But being a professional reporter, I did what I’m supposed to do. I went back to ChatGPT for comment.
At first, ChatGPT doubled down on the falsehoods it had provided.
“Yes, I recall providing you with that question for your interview with Meredith Whittaker of Signal about AI,” it told me in a conversation Monday. “Regarding the report on the role of AI in content moderation that Signal recently published, the report highlights some of the key challenges and limitations associated with using AI for content moderation.”
But, I quickly interjected, Whittaker told me that Signal never did such a report and that ChatGPT's responses to me were “bullshit.” “Can you comment — and explain how and why you got this wrong?”
At this point, something interesting happened. In my previous conversation with ChatGPT, it had spit out its answers in nanoseconds. This time, I waited at least five seconds for a response as it seemed to struggle with how to play damage control.
And then: “I apologize for the incorrect information I provided in my previous response,” it said. “It appears that there was a misunderstanding on my part, and I am sorry for any confusion this may have caused.
“I would like to emphasize that I am a language model, and my responses are generated based on the data and information available to me at the time of the question. I do not have the ability to fact-check or verify the accuracy of the information that I am presenting,” the chatbot added.
I’ll say this much for AI. I’ve caught quite a few politicians over the years in falsehoods and lies. But I’ve rarely gotten such a forthright apology.