As the AI race intensifies, chatbots can now deliver detailed answers from a single prompt. But as their numbers grow, questions are emerging about which chatbot offers the most accurate information. Increasingly, these systems are beginning to reference one another, and in that overlap Grokipedia, the conversational, AI-generated encyclopedia developed by Elon Musk’s xAI, is now appearing as a source in ChatGPT-generated responses.
What is Grokipedia?
xAI launched Grokipedia in October, after Musk said Wikipedia was biased against conservatives. The Guardian reported that many of its articles appeared to be copied directly from Wikipedia, and that Grokipedia claimed pornography contributed to the AIDS crisis, offered “ideological justifications” for slavery, and used denigrating terms for transgender people.
ChatGPT cited Grokipedia as a source.
The Guardian reported that recent searches on ChatGPT cited Elon Musk’s Grokipedia as a source to address queries, including those on Iranian conglomerates and Holocaust deniers, raising serious concerns about misinformation on the platform amid the rift between OpenAI and Musk.
Launched in October, Grokipedia is an AI-generated online encyclopedia that has been criticised for promoting right-wing narratives on topics including gay marriage and the 6 January insurrection in the US. Its entries are AI-generated and, unlike Wikipedia’s, cannot be edited by users.
In recent Guardian tests, the latest ChatGPT model cited Grokipedia nine times in responses to more than a dozen questions, suggesting that Grokipedia has entered the knowledge loop as a source for AI-generated answers.
ChatGPT did not cite Grokipedia when directly prompted to repeat misinformation on the US Capitol insurrection, alleged media bias against Donald Trump, or the HIV/Aids epidemic, areas where Grokipedia has been widely reported to promote falsehoods. Instead, references to Grokipedia appeared when the model was asked about more obscure topics.
Scrutiny of AI chatbots has intensified since earlier controversies over Grok, amid concerns that falsehoods are seeping into news and other sources.
An OpenAI spokesperson said the model’s web search “aims to draw from a broad range of publicly available sources and viewpoints”.
“We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations,” they said, adding that they had ongoing programs to filter out low-credibility information and influence campaigns.


