'Hallucinating' ChatGPT Tells Man He Killed His Kids

A digital rights group has filed a complaint with data protection authorities
Posted Mar 21, 2025 10:23 AM CDT
The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT. (AP Photo/Michael Dwyer, File)

A Norwegian man says ChatGPT got his hometown right, and the number and gender of his children—but its other details were false and disturbing. Arve Hjalmar Holmen says that when he asked the chatbot who he was, he was told that he was serving a 21-year prison sentence for murdering two of his sons and trying to kill the third, the BBC reports.

  • "Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event," ChatGPT told Holmen. "He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020." ChatGPT went on to claim that in a case "widely covered in the media due to its tragic nature," Holmen was accused and later convicted of killing the boys.

Holmen, who has lived in Belgium since 2002 and works in a museum, tells Aftenposten that earlier queries on ChatGPT and other AI models also delivered inaccurate answers, including claims that he was a politician or a company CEO, but the murder claim is the one that worries him. "Some think that there is no smoke without fire—the fact that someone could read this output and believe it is true is what scares me the most," he says. Digital rights group Noyb has filed a complaint with the Norwegian Data Protection Authority on his behalf, saying the false and defamatory information violated data protection rules.

Noyb is asking that OpenAI, the company behind ChatGPT, be fined and ordered to improve its model to prevent this kind of error, the Verge reports. AI chatbots are known to "hallucinate" by presenting false information as fact, but Noyb says a disclaimer noting that ChatGPT can make mistakes isn't good enough. "You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true," says Joakim Söderberg, a data protection lawyer at Noyb. In a statement, OpenAI said it is working on ways to reduce hallucinations and that Holmen's complaint relates to a version of ChatGPT "which has since been enhanced with online search capabilities that improves accuracy." (More ChatGPT stories.)
