Privacy Group Files Complaint Against OpenAI Over False Claim

A European privacy group has filed a legal complaint against OpenAI. The complaint follows a false claim made by ChatGPT about a Norwegian man. ChatGPT wrongly stated that the man, Arve Hjalmar Holmen, was convicted of murder. The privacy group argues that OpenAI violated Europe’s General Data Protection Regulation (GDPR) by sharing incorrect personal data.

The False Claim by ChatGPT

The incident began when Holmen asked ChatGPT about himself. The response was shocking: the chatbot falsely claimed that Holmen had been convicted of killing two of his sons and attempting to kill a third, and that he had been sentenced to 21 years in prison. None of this was true. While some personal details were accurate, such as Holmen’s hometown and the number and gender of his children, the accusation itself was entirely fabricated.

This kind of error, known as an “AI hallucination,” is common in large language models like ChatGPT. Hallucinations occur when an AI model generates false or misleading information that sounds plausible. They arise because these models predict likely-sounding text rather than retrieving verified facts, so gaps or biases in the training data can end up filled with fabricated details. The consequences can be serious, because fabricated claims can damage real people’s reputations. Holmen’s case is a clear example of how AI errors can cause lasting damage to someone’s life.
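
To see why such errors can reach users unchecked, consider the minimal sketch below of how an application might query a large language model. It is written against OpenAI’s official Python client; the model name and prompt are illustrative, not drawn from this case. The key point is that the API returns generated text with no built-in indication of whether its claims are true.

    # Minimal sketch: querying a language model via OpenAI's official
    # Python client (pip install openai). Model name and prompt are
    # illustrative, not taken from the Holmen case.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": "Who is Jane Doe?"}],
    )

    # The response contains generated text only; nothing marks individual
    # claims as verified or unverified, so a hallucinated statement about
    # a real person looks identical to an accurate one.
    print(response.choices[0].message.content)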

The Legal Complaint

Noyb, an Austrian privacy group, has taken legal action against OpenAI on Holmen’s behalf. The group claims that OpenAI broke the GDPR: Article 5(1)(d) requires companies to keep personal data accurate, and Noyb argues that OpenAI failed to ensure that the information ChatGPT produced about Holmen was correct.

Since the complaint was filed, OpenAI has updated its model, and ChatGPT no longer repeats the false claim about Holmen. However, Noyb remains concerned that the false information may still exist inside OpenAI’s systems. Because user interactions can be fed back into training, Holmen cannot be certain that the incorrect data has been fully erased or that it will never resurface.

What Noyb Wants

Noyb has made several requests in its complaint. First, it wants OpenAI to delete the false information about Holmen. Second, it wants OpenAI to improve its models so that similar mistakes don’t happen in the future. Lastly, it asks the regulator to fine OpenAI for violating the GDPR.

Holmen is worried about the impact of the false accusation. He says, “Some people believe ‘there is no smoke without fire.’ That’s what frightens me the most.” Even if people later learn the claim was false, the damage to his reputation could be long-lasting, and his case shows how AI errors can hurt people even after the mistakes are corrected.

OpenAI’s GDPR Compliance

Noyb also criticized OpenAI for not following GDPR rules. Kleanthi Sardeli, a lawyer for Noyb, stated that AI companies can’t ignore privacy laws. She explained, “Adding a disclaimer does not make the law disappear.” Sardeli emphasized that if AI companies don’t fix these types of errors, people will continue to suffer from reputational harm.

This case could set an important legal precedent for how AI companies deal with privacy laws in Europe. If OpenAI is found to have violated the GDPR, it could face fines of up to €20 million or 4% of its global annual turnover, whichever is higher. The case could also influence how other AI companies handle personal data in the future.

The Bigger Picture

This case is not just about one man. It raises broader questions about how AI systems handle data. AI tools like ChatGPT can process large amounts of information. However, they can also generate false or misleading responses. When these errors happen, they can harm real people. As AI becomes more common, regulators will need to decide how to protect individuals from AI-generated misinformation.

If OpenAI is fined or required to make changes, the decision could have a significant impact on how AI is regulated. Other companies that deploy AI could face similar scrutiny, and regulators could enforce privacy rules against AI systems more strictly, prompting companies to improve how they handle personal data.

OpenAI has not yet responded to the complaint, but this case is far from over. Norway’s data protection authority, Datatilsynet, will decide whether OpenAI broke GDPR rules. If it finds that OpenAI violated the law, the company could be fined or ordered to take corrective action. The outcome could shape how AI companies are held accountable for the information their systems provide.

For now, Holmen’s experience shows the risks of AI-generated misinformation. As AI continues to develop, cases like this will likely become more common, and it’s crucial that AI companies take responsibility for the data their systems produce and ensure they don’t harm people’s reputations or privacy.