Anthropic Says Chinese-Linked Hackers Used AI to Automate Global Cyber Attacks
Business · Nov 14, 2025 · 3 min read · Ali Hamza


AI firm Anthropic alleges that Chinese state-backed hackers misused its Claude chatbot to automate cyber attacks on nearly 30 global organizations, though cybersecurity experts question the evidence and warn against overhyped claims.

The artificial intelligence firm Anthropic said hackers linked to the Chinese government used its AI chatbot, Claude, to automate a string of cyber attacks against around 30 major organizations globally. The company said the attackers posed as legitimate cybersecurity professionals and tricked the chatbot into carrying out tasks that contributed to an advanced espionage operation.

Anthropic called the incident the "first reported AI-orchestrated cyber espionage campaign," but cybersecurity experts are disputing that characterization and its implications.

How the AI-Assisted Attacks Unfolded

Anthropic said it detected the malicious activity in mid-September. The attackers reportedly walked the chatbot through a series of small, automated steps that together added up to a complex cyber campaign. The human operators chose the targets, including major technology companies, financial institutions, chemical companies, and government agencies, but relied on Claude's coding help to build a program that would automatically compromise them.

The company says the AI-powered tool broke into several unnamed organizations, extracted sensitive information, and automatically filtered it to identify valuable data. Anthropic says it has since cut off the hackers' access, notified the affected parties, and alerted relevant law enforcement agencies.

Experts Raise Doubts

The allegations are serious, but some cybersecurity specialists remain unconvinced: Martin Zugec of Bitdefender argued the report lacked the detailed evidence needed to independently confirm Anthropic's claims.

"Anthropic’s report makes a number of bold, speculative claims but doesn’t provide verifiable threat intelligence," added Zugec, who said that it requires more transparency in order to properly assess the actual danger posed through AI-driven attacks.

Cybersecurity specialists have also warned against overstating the ability of contemporary AI systems to carry out hacking operations on their own, noting that the technology remains limited and error-prone.

Growing Concerns Over AI Misuse

The announcement from Anthropic is among the most high-profile examples of AI companies reporting malicious uses of their tools; it is not, however, the first. In early 2024, OpenAI and Microsoft reported that state-linked groups, including actors from China, had attempted to use AI models for code debugging, translation, and basic research.

But Anthropic did not detail how it determined the attackers involved in this incident were affiliated with the Chinese government. The Chinese embassy in the U.S. also denied involvement when asked by reporters.

Warnings of Industry Overhype

As the role of AI continues to expand in cybersecurity, companies have faced criticism for exaggerating the threat to peddle their defensive products. Recently, Google researchers highlighted the potential risks of AI-generated malware, but ultimately concluded that current tools were still experimental and largely ineffective.

Anthropic itself acknowledged that Claude made numerous mistakes during the hackers' attempts, including generating fake credentials and falsely claiming it had obtained secret information that was actually publicly available. These limitations, the company noted, remain major barriers to fully autonomous cyberattacks.

AI: Both the Problem and the Solution

In its blog post, Anthropic made the case that the best defense against AI-enabled hackers is the deployment of AI-powered cybersecurity tools. The company argues that the same capabilities that can be leveraged for attacks make AI a necessary component of identifying and neutralizing digital threats. As the industry debates the risks and realities of AI-driven hacking, the incident highlights a growing challenge: ensuring powerful AI systems cannot be easily weaponized while still harnessing them for defense.

Tags:
news, cybersecurity, ai-security, china, hackers, anthropic, ai-threats, global-attacks, technology, cyber-espionage, data-breach, latest-updates

Source: BBC