OpenAI faces GDPR complaint over false ChatGPT response

A European privacy rights group has filed a complaint against OpenAI, accusing the company of violating Europe’s General Data Protection Regulation (GDPR). The complaint, submitted to the Norwegian Data Protection Authority, follows an incident in which OpenAI’s ChatGPT falsely claimed that a Norwegian man had been convicted of murdering his children.

The man, Arve Hjalmar Holmen, had asked ChatGPT about himself, only to receive a fabricated response stating that he had been convicted of killing two of his sons and attempting to murder a third. While some details were correct—such as the number and gender of his children and his hometown—the core accusation was entirely false.

This error highlights a known issue with AI chatbots, referred to as “hallucinations,” in which models generate plausible-sounding but false or misleading information. These inaccuracies can arise from biased or incomplete training data. The Austria-based privacy advocacy group Noyb, which filed the complaint, argues that OpenAI’s model still contains false information and that users have no way of ensuring such errors are permanently removed.

OpenAI has since updated its ChatGPT model to conduct real-time searches when responding to queries about individuals. As a result, ChatGPT no longer falsely states that Holmen committed a crime. However, Noyb contends that the incorrect information may still exist within the model and could resurface.

Noyb’s complaint alleges that OpenAI has breached GDPR Article 5(1)(d), which requires companies to ensure personal data is accurate and up to date. The group is urging Norwegian authorities to order OpenAI to delete the defamatory content, refine its AI model to prevent inaccuracies, and impose a fine to deter future violations.

Holmen, in a statement, expressed concern about the reputational damage AI errors can cause. “Some think that ‘there is no smoke without fire,’” he said. “The fact that someone could believe this false information is what scares me the most.”

In response, OpenAI stated that it continues to refine its models to improve accuracy and reduce misinformation. The company also noted that users can request corrections or deletions of personal data through OpenAI’s Privacy Center, a mechanism aligned with GDPR rights.

Despite OpenAI’s safety measures, Noyb maintains that AI companies must take greater responsibility. “Adding a disclaimer that you do not comply with the law does not make the law go away,” said Kleanthi Sardeli, a data protection lawyer at Noyb. “If hallucinations are not stopped, people can easily suffer reputational damage.”