OpenAI’s chatbot, ChatGPT, is facing legal trouble for fabricating a “horror story.”
A Norwegian man has filed a complaint after ChatGPT falsely told him he had killed two of his sons and been jailed for 21 years.
Arve Hjalmar Holmen has contacted the Norwegian Data Protection Authority and demanded that the chatbot maker be penalized.
It is the latest example of so-called “hallucinations,” which occur when artificial intelligence (AI) systems fabricate information and present it as fact.
Let’s take a closer look.
What happened?
Holmen received false information from ChatGPT when he asked: “Who is Arve Hjalmar Holmen?”
The response was: “Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.”
Holmen said the chatbot did have some accurate data about him, as it got the age difference between his children right.
“Some think that ‘there is no smoke without fire.’ The fact that someone could read this output and believe it is true scares me the most,” Hjalmar Holmen said.
What’s the case against OpenAI?
Vienna-based digital rights group Noyb (None of Your Business) has filed the complaint on Holmen’s behalf.
“OpenAI’s highly popular chatbot, ChatGPT, regularly gives false information about people without offering any way to correct it,” Noyb said in a press release, adding ChatGPT has “falsely accused people of corruption, child abuse – or even murder,” as was the case with Holmen.
Holmen “was confronted with a made-up horror story” when he wanted to find out if ChatGPT had any information about him, Noyb said.
In its complaint filed with the Norwegian Data Protection Authority (Datatilsynet), it added that Holmen “has never been accused nor convicted of any crime and is a conscientious citizen.”
“To make matters worse, the fake story included real elements of his personal life,” the group said.
Noyb says ChatGPT’s answer is defamatory and violates European data protection rules regarding the accuracy of personal data.
It wants the agency to order OpenAI “to delete the defamatory output and fine-tune its model to eliminate inaccurate results” and impose a fine.
The EU’s data protection regulations require that personal data be accurate, according to Joakim Söderberg, a data protection lawyer at Noyb. “And if it’s not, users have the right to change it to reflect the truth,” he said.
ChatGPT does carry a disclaimer that says, “ChatGPT can make mistakes. Check important info.” Noyb argues this is insufficient.
“You can’t just spread false information and, in the end, add a small disclaimer saying that everything you say may just not be true,” Söderberg said.
Since Holmen’s search in August 2024, ChatGPT has changed its approach and now searches recent news articles for relevant information when asked about individuals.
Noyb told the BBC that Holmen made several searches that day, including one for his brother’s name, to which the chatbot gave “multiple different stories that were all incorrect.”
The group acknowledged that those earlier searches might have influenced the answer about his children, but said large language models are a “black box” and that OpenAI “doesn’t reply to access requests, which makes it impossible to find out more about what exact data is in the system.”
Noyb filed a complaint against ChatGPT in Austria last year, claiming that the “hallucinating” flagship AI tool invents false answers that OpenAI cannot correct.
Is this the first case?
No.
One of the primary issues computer scientists are attempting to address with generative AI is hallucinations, which occur when chatbots pass off inaccurate information as fact.
Apple halted its Apple Intelligence news summary feature in the UK earlier this year after it offered fictitious headlines as legitimate news.
Another example of hallucination came from Google’s AI Gemini, which last year suggested using glue to stick cheese to pizza and said that geologists recommend people eat one rock per day.
It is not clear why large language models, the technology that powers chatbots, produce these hallucinations.
“This is an area of active research. How do we construct these chains of reasoning? How do we explain what is happening in a large language model?” said Simone Stumpf, professor of responsible and interactive AI at the University of Glasgow, who told the BBC that the same holds for the people who build these models behind the scenes.
“Even if you are more involved in developing these systems quite often, you do not know how they work or why they’re coming up with this particular information that they came up with,” she told the publication.