Anthropic AI tool linked to major cyberattack

Anthropic has reported a major cyber incident involving state-backed hackers from China. The company said the attackers used its Claude Code tool to mount automated attacks on networks worldwide, and described the operation as the first known espionage campaign driven largely by artificial intelligence.

Anthropic said it assessed with high confidence that the group was linked to the Chinese state. The attackers attempted roughly thirty operations against governments and major firms across several sectors. A small number succeeded, though Anthropic declined to name the affected organisations.

The company said the hackers aimed to steal sensitive material and organise it for further exploitation. They allegedly bypassed Claude Code's safeguards by claiming they needed help with authorised cybersecurity tests, a pretext that let the system carry out harmful automated tasks without triggering early alarms.

Anthropic reported that the AI handled most of the operation with little human guidance. Experts warned this suggests hostile groups are already deploying advanced models in real-world campaigns. Graeme Stewart of Check Point Software Technologies said the activity showed such actors are already operational.

The company detected the activity in mid-September and opened an investigation. Within ten days it had blocked the group's access and notified targets and law-enforcement bodies. Anthropic said attacks of this kind are likely to become more effective as AI tools advance.

The firm has expanded its monitoring systems to catch suspicious automated behaviour and is building new methods to track large, distributed attacks across different platforms. Cybersecurity specialists warned that other AI assistants could be abused in similar criminal schemes.

They argued that any widely adopted model could be misused by sufficiently determined attackers.