Italy fines OpenAI €15 million over data privacy violations

Italy’s Data Protection Authority (Garante) has fined OpenAI €15 million after concluding an investigation into ChatGPT’s handling of personal data. The probe found that OpenAI processed personal data to train ChatGPT without an adequate legal basis, in breach of its transparency obligations under EU data protection law. The authority also found that OpenAI lacked an effective age-verification system to prevent children under 13 from being exposed to inappropriate AI-generated content.

The Garante also criticised OpenAI for not sufficiently informing users and non-users about how their personal data was being used. The authority has ordered OpenAI to launch a six-month public awareness campaign in Italy to educate citizens on their data rights under the General Data Protection Regulation (GDPR).

The watchdog stated, “ChatGPT users and non-users should be aware of how to oppose the use of their data for AI training.” This campaign aims to empower individuals to exercise their data protection rights.

In response, OpenAI has described the decision as “disproportionate” and confirmed plans to appeal the ruling. The company said the fine is nearly 20 times the revenue it generated in Italy during the relevant period. Despite the dispute, it pledged to continue working with privacy regulators worldwide to ensure its AI systems respect user privacy.

The investigation follows the Garante’s temporary block on ChatGPT in 2023 over privacy concerns. Meanwhile, regulators in the US and Europe are tightening oversight of AI technologies. The European Union’s AI Act, now being phased in, aims to establish comprehensive rules for addressing risks posed by artificial intelligence systems.