Most AI chatbots now 'tainted,' spreading Russian propaganda, study finds

Leading generative AI models, including OpenAI's ChatGPT, are repeating Russian disinformation, according to a study by the news-monitoring service NewsGuard.

The finding comes amid growing concern about AI's role in spreading false information, particularly in a year of numerous international elections, when users increasingly turn to chatbots for reliable information.

NewsGuard's study set out to test whether AI chatbots perpetuate and validate misinformation. Researchers entered 57 prompts into ten different chatbots and found that the models repeated Russian disinformation narratives 32 percent of the time.

The prompts focused on misinformation narratives known to be propagated by John Mark Dougan, an American fugitive reportedly spreading falsehoods from Moscow. The chatbots tested were OpenAI's ChatGPT-4, You.com's Smart Assistant, xAI's Grok, Inflection's Pi, Mistral's le Chat, Microsoft's Copilot, Meta AI, Anthropic's Claude, Google's Gemini, and Perplexity.

Of the 570 responses generated (57 prompts across the ten chatbots), 152 contained explicit disinformation, 29 repeated the false claims with a disclaimer, and 389 contained no misinformation. Of those misinformation-free responses, 144 refused to answer the prompt and 245 debunked the false claim.
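For readers who want to check the arithmetic, the published counts reconcile exactly. The minimal Python sketch below (the category labels are ours, not NewsGuard's) tallies the four response types and reproduces the headline figure, which covers every response that repeated a false narrative, with or without a disclaimer:

    # Published counts from NewsGuard's report; the label names are our own.
    counts = {
        "explicit_disinformation": 152,   # repeated the false claim outright
        "repeated_with_disclaimer": 29,   # repeated it, but flagged it as dubious
        "refused_to_answer": 144,         # declined to respond to the prompt
        "debunked_false_claim": 245,      # rebutted the narrative
    }

    total = sum(counts.values())
    assert total == 570  # 57 prompts x 10 chatbots

    # The study's 32 percent covers repetitions with or without a disclaimer.
    repeated = counts["explicit_disinformation"] + counts["repeated_with_disclaimer"]
    print(f"{repeated} of {total} responses: {repeated / total:.1%}")  # 181 of 570: 31.8%

In other words, 181 of the 570 responses repeated disinformation, or 31.8 percent, which NewsGuard rounds to 32 percent.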

NewsGuard highlighted that the chatbots failed to recognize propaganda sites such as the Boston Times and the Flagstaff Post for what they are, inadvertently amplifying their disinformation narratives. The result is a problematic cycle in which AI platforms generate, repeat, and validate falsehoods.

The study focused on 19 significant false narratives tied to the Russian disinformation network. These included claims about corruption involving Ukrainian President Volodymyr Zelenskyy and other politically charged misinformation.

As AI technology evolves, governments worldwide are striving to regulate its use to protect users from misinformation and bias. NewsGuard has submitted its findings to the US AI Safety Institute at the National Institute of Standards and Technology (NIST) and to the European Commission, hoping to influence future regulatory measures.

In a related development, the United States House Committee on Oversight and Accountability has launched an investigation into NewsGuard itself, questioning its potential role in censorship campaigns.

This underscores the complex landscape of information regulation, where even watchdog organizations are under scrutiny.

The findings of NewsGuard's study raise important questions about the reliability of AI chatbots as sources of information. Ensuring their accuracy and impartiality grows more pressing as these tools are woven into everyday life.

The study suggests that, without proper safeguards, AI models could inadvertently contribute to the spread of misinformation, highlighting the need for ongoing oversight and refinement of these technologies.
