The future of AI is safe for now, as Anthropic has reportedly prevented a cybercriminal from using its AI platform Claude to run scams and hacks against a variety of companies.
The rapid evolution of AI over the last few years has made cybersecurity far more difficult in 2025. The technology is being used for a wide range of nefarious purposes, making scams harder to spot and hacks harder to deter.
Luckily, Anthropic was on top of this one, announcing that it had swiftly put a stop to a plan that could have had some seriously negative consequences for the companies in the crosshairs.
Anthropic Stops Hackers from Using Claude AI to Hack Companies
According to an announcement from Anthropic on Wednesday, the AI company “disrupted a sophisticated cybercriminal that used Claude Code to commit large-scale theft and extortion of personal data.”
“We’ve developed sophisticated safety and security measures to prevent the misuse of our AI models. But cybercriminals and other malicious actors are actively attempting to find ways around them.” – Anthropic announcement
The attack reportedly targeted as many as 17 different organizations, in sectors including “healthcare, the emergency services, and government and religious institutions.”
How Did the Hacker Use Claude AI?
This was no ordinary hack attempt. According to Anthropic, the hacker in question used the technology to “an unprecedented degree” to create “attacks that would otherwise have required a team of operators.” As for what that hacker actually used it for, the list was seemingly endless.
Here’s a list of some of the criminal actions that the hacker was attempting to perform with Claude AI, according to Anthropic:
- Automate reconnaissance
- Harvest victims’ credentials
- Penetrate networks
- Make both tactical and strategic decisions
- Decide which data to exfiltrate
- Craft psychologically targeted extortion demands
- Analyze financial data to determine appropriate ransom amounts
- Generate visually alarming ransom notes
Suffice it to say, this hacker was on a mission to use AI in the worst possible way in an effort to hack these companies. Luckily, Anthropic was on the case.
How Did Anthropic Stop the Hacker?
Given the complicated nature of the attack and the industries that could have been impacted, it’s a good thing Anthropic was able to disrupt this operation before it went any further. But how exactly did Anthropic stop the hacker in their tracks?
Well, for starters, Anthropic “banned the accounts in question as soon as we discovered this operation.”
“We have robust safeguards and multiple layers of defense for detecting this kind of misuse, but determined actors sometimes attempt to evade our systems through sophisticated techniques.” – Jacob Klein, head of threat intelligence for Anthropic to NBC News
On top of that, Anthropic has developed a new screening tool in hopes of spotting these kinds of nefarious actions earlier in the process, and the company also shared data with the authorities to help them prevent attacks in the future.