Hacker breaches OpenAI, accesses internal chats and AI design details

A hacker infiltrated OpenAI’s systems, gaining access to internal chats and potentially stealing details about the company’s AI designs, according to a recent report. The company, however, did not report the breach to law enforcement.

The New York Times reported that the hacker accessed an internal forum where OpenAI employees discussed their technologies. The intruder, however, did not penetrate the systems where OpenAI develops and stores its products.

OpenAI has been at the forefront of the AI surge, especially after the release of its AI chatbot, ChatGPT, in late 2022. The advent of generative AI has prompted major tech companies to venture into the sector, with experts highlighting it as a significant innovation of our time.

The report indicated that OpenAI executives informed their staff and board about the breach in April of the previous year but kept it from the public because no customer or partner data was compromised. They also declined to notify US law enforcement, believing the hacker to be a private individual with no ties to foreign governments. OpenAI has not commented on the incident.

Dr. Ilia Kolochenko, a cybersecurity expert and CEO of ImmuniWeb, suggested that attacks on AI firms are likely to escalate due to the technology’s growing importance. He warned that these incidents might be more common than reported and highlighted the risks posed by state-backed cybercriminals targeting AI companies for intellectual property, including technological research, large language models, and commercial information.

Kolochenko advised corporate users of generative AI to be cautious when sharing proprietary data for AI training, as it could be targeted by cybercriminals. He emphasized the need for vigilance, given the increasing focus on AI by various threat actors.