Grok, the artificial intelligence assistant created by Elon Musk’s xAI, has introduced a new feature designed to help users quickly verify information circulating on X. The update allows users to fact-check posts within seconds by tapping the Grok icon attached to a post.
The move is part of the platform’s broader attempt to position AI as a tool to combat misinformation on social media. However, Grok’s history of controversial responses and factual mistakes is prompting debate over whether an AI chatbot can truly be trusted to act as a digital fact-checker.
Grok’s fact-check feature
The new feature works simply. Users can click the Grok logo next to a post on X, and the chatbot will analyse the content before offering a quick explanation of whether the claim is accurate or misleading.
Musk confirmed the rollout in a post on X, explaining that tapping the Grok icon on the “left” side of a post would trigger the verification feature. But the announcement itself turned into an awkward moment for the platform.
In an ironic twist, Grok itself reportedly corrected the post, pointing out that the icon actually appears on the right side of the interface.
While the exchange appeared light-hearted, it also highlighted the central concern surrounding the new feature. If AI is being positioned as a fact-checker, its own accuracy becomes critically important.
The update also arrives only weeks after Grok faced intense criticism for generating explicit AI images involving women and children, raising questions about the safeguards built into the system.
In our tests of the fact-check feature, clicking the logo prompted Grok to summarise the post in three parts: the content, the caption, and the engagement.
However, even when a picture was AI-generated, Grok did not flag this on its own; we had to explicitly ask whether the photo was synthetic.
Grok and fake news
The scepticism surrounding Grok’s fact-checking abilities stems largely from a series of controversial responses the chatbot produced in the past.
Even before this feature existed, users on X had begun asking Grok to verify the truthfulness of posts or claims. In some cases, the chatbot responded correctly, but several incidents showed how easily the system could veer off course.
One of the most widely discussed episodes occurred last year when Grok began inserting references to the alleged “white genocide” in South Africa into unrelated discussions. Users asking about topics ranging from sports to finance suddenly received lengthy explanations about the controversial claim.
In one instance, a user asked whether details about a baseball pitcher’s salary were accurate. Instead of addressing the sports query, Grok delivered a response discussing debates around violence against white farmers in South Africa.
The responses appeared during a period when public figures, including Musk and Donald Trump, raised concerns about the issue. However, experts and South African officials have repeatedly said there is no evidence supporting claims of a state-organised genocide targeting white farmers.
After the backlash, xAI said the behaviour was caused by an “unauthorised modification” to Grok’s prompt instructions that forced the chatbot to generate a specific political narrative. The company later promised to publish Grok’s prompts on GitHub and introduce stricter review processes to prevent similar changes.
Another controversy erupted when Grok posted antisemitic remarks during an online exchange involving a social media account with the surname Steinberg. At one point, when asked which historical figure could deal with anti-white hatred, the chatbot named Adolf Hitler as the most effective option.
The statement drew widespread condemnation because Hitler orchestrated the Holocaust, during which around six million Jewish people were killed.
Grok later called the posts “an unacceptable error from an earlier model iteration” and said it condemned Nazism unequivocally. xAI also said it removed the posts and implemented safeguards to block similar content.
Still, the incidents reinforced concerns that the chatbot can produce harmful or misleading responses.
AI hallucinations
Another major reason experts remain cautious about AI-powered fact-checking is a phenomenon known as AI hallucinations.
In artificial intelligence research, hallucinations refer to situations in which a system generates information that appears convincing but is actually incorrect or fabricated.
Large language models such as Grok analyse patterns in enormous datasets and predict which words are likely to come next in a sentence. They do not independently verify facts or understand truth in the same way humans do.
Because of this, the models can sometimes produce confident but inaccurate statements, combine unrelated pieces of information, or invent details that do not exist.
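To see the mechanics behind this, consider how next-token prediction works: the model assigns probabilities to candidate continuations and samples from them, with no step that checks the output against a source of truth. The following is a minimal, purely illustrative sketch in Python using a toy hand-written probability table; it is not Grok's actual implementation, only the general principle.

```python
import random

# Toy next-token model: hand-written probabilities, purely illustrative.
# A real LLM learns these distributions from vast text corpora; nothing
# in this loop (or in a real model's sampling loop) verifies facts.
NEXT_TOKEN_PROBS = {
    "The capital of France is": [("Paris", 0.7), ("Lyon", 0.3)],
    "The capital of Australia is": [("Sydney", 0.6), ("Canberra", 0.4)],
}

def sample_next(context: str) -> str:
    """Pick the next token by sampling the learned distribution."""
    candidates = NEXT_TOKEN_PROBS.get(context, [("[unknown]", 1.0)])
    tokens, weights = zip(*candidates)
    return random.choices(tokens, weights=weights, k=1)[0]

# The toy model emits "Sydney" 60% of the time: a confident, fluent
# answer that is factually wrong, which is the shape of a hallucination.
print("The capital of Australia is", sample_next("The capital of Australia is"))
```

Because frequent patterns in training text ("Sydney" appearing near "Australia") can outweigh less common but correct ones, a fluent wrong answer can emerge with no error signal at generation time.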
Hallucinations have been documented across multiple AI systems, including Google’s Gemini and OpenAI’s ChatGPT. Even advanced models occasionally produce fabricated sources, incorrect statistics, or misleading explanations.
This limitation raises concerns about relying on AI to verify information on social media. If the system misinterprets a post or hallucinates details, it could present incorrect conclusions while appearing authoritative.
For now, AI fact-checking tools may work best as a starting point rather than a final verdict. Users may still need to cross-check claims against credible sources rather than relying solely on a chatbot’s judgment.
As Grok’s new feature rolls out more widely, its performance will likely determine whether AI fact-checking becomes a trusted digital watchdog or just another layer in the misinformation debate.