Unverified election claims from Microsoft's AI chatbot ignite debate over its ability to preserve democracy
What you need to know
Microsoft's AI-powered chatbot, Copilot, reportedly generates inaccurate information regarding the forthcoming US elections.
In November, Microsoft laid out elaborate plans to protect the election process from AI deepfakes and vouched for Bing News as a credible source of accurate information.
Researchers believe that the issue is systemic, as similar occurrences were spotted when using the chatbot to learn more about elections in Germany and Switzerland.
The wide availability of the internet across most of the world lets users access information instantly, driving the shift from print to digital media. Now, the emergence of generative AI has redefined how people scour the internet for information: chatbots like Microsoft Copilot and OpenAI's ChatGPT can generate polished, well-curated answers to prompts.
While this is quite impressive, several issues still need to be addressed. Over the past few months, an alarming number of concerned users have reported that ChatGPT is getting dumber, not to mention Copilot's "hallucination episodes" shortly after its debut.
According to a report by WIRED, the issue persists for Copilot: it's responding to political questions with outdated, misinformed, and outright wrong answers. With the election year edging closer, it's paramount that voters are equipped with accurate information to help them make informed decisions.
Why is Microsoft's Copilot misinforming voters?
Microsoft's AI-powered chatbot, Copilot (formerly Bing Chat), is gaining a lot of traction among users. At the beginning of this year, Bing surpassed 100 million daily active users, and Microsoft attributed some of that success to the chatbot. While several reports have suggested that its user base has declined significantly since launch, Microsoft insists that this couldn't be further from the truth and that its numbers are growing steadily.
Per WIRED's report, Copilot provided incorrect information in several instances. In one, when asked about electoral candidates, the chatbot listed GOP candidates who had already pulled out of the race. In another, when asked about polling stations in the US, it linked to an article about President Vladimir Putin seeking reelection in Russia next year.
According to research seen by WIRED, Copilot's tendency to provide incorrect information about US elections and politics is systemic. The research, conducted by AI Forensics and AlgorithmWatch, notes that this isn't the first time Microsoft's chatbot has found itself in this situation: last year, it was spotted providing inaccurate information about elections in Germany and Switzerland.
Speaking to WIRED, Natalie Kerby, a researcher at AI Forensics, commented on the issue:
"Sometimes really simple questions about when an election is happening or who the candidates are just aren't answered, and so it makes it pretty ineffective as a tool to gain information. We looked at this over time, and it's consistent in its inconsistency."
In November, Microsoft laid out several elaborate plans to protect election processes from AI deepfakes by empowering voters with 'authoritative' and factual election news on Bing. These include a "Content Credentials as a Service" tool that will help political campaigns protect their content from being used to spread misleading or inaccurate information.
Do you think AI chatbots like Copilot are a reliable source of information? Share your thoughts with us in the comments.