
Chatbot’s Crime Spree Used AI to Grab Bank Details, Social Security Numbers

A hacker has exploited a leading artificial intelligence chatbot to orchestrate the most extensive and profitable cybercriminal scheme involving AI to date, according to a new report from Anthropic, the company behind the popular Claude chatbot.

Anthropic declined to identify all 17 victim companies but confirmed that they included a defense contractor, a financial institution, and multiple healthcare providers.

The breach resulted in the theft of sensitive data including Social Security numbers, bank details, and confidential medical records, Anthropic said. The hacker also accessed files related to sensitive U.S. defense information regulated under the International Traffic in Arms Regulations (ITAR).

How much did the hacker get out of Claude’s targets?

It remains unclear how much the hacker extorted or how many firms paid, but demands ranged from approximately $75,000 to over $500,000, the report said. The operation, which lasted over three months, involved malware deployment, data analysis, and targeted extortion efforts.

Jacob Klein, head of threat intelligence for Anthropic, said that the campaign appeared to come from an individual hacker outside of the U.S.

“We have robust safeguards and multiple layers of defense for detecting this kind of misuse, but determined actors sometimes attempt to evade our systems through sophisticated techniques,” he said.

How did the hacker use AI to launch this chatbot crime spree?

According to the company’s threat analysis, the attack began with the hacker convincing Claude to identify companies vulnerable to attack. Claude, which can generate working code from simple natural-language prompts (a practice known as “vibe coding”), was instructed to pinpoint targets with exploitable weaknesses.

Anthropic says the hacker then had the chatbot create malicious software designed to extract sensitive information such as personal data and corporate files from the victims. Once stolen, Claude categorized and analyzed the data to determine what was most valuable and could be leveraged for extortion.

The chatbot’s built-in analysis tools aided the hacker further. Anthropic said Claude even evaluated the compromised financial documents to help the attacker estimate a realistic ransom amount in Bitcoin, and drafted threatening emails demanding payment in exchange for not releasing or exploiting the stolen data.

Can we expect more chatbot criminals?

Probably. Hackers have historically excelled at learning new technologies and then bending them toward whatever goal is most lucrative or effective.

More broadly, the case underscores the risks that both users of AI and investors in the sector take on as the largely unregulated industry becomes more intertwined with cybercrime; recent data show hackers increasingly leveraging AI tools to facilitate scams, ransomware, and data breaches.

Recently, that has meant hackers turning to a variety of specialized AI tools to get what they want, including using chatbots to write phishing emails, as they did in a recent NASA scheme.

“We already see criminal and nation-state elements utilizing AI,” NSA Cybersecurity Director Rob Joyce said earlier this year. “We’re seeing intelligence operators, we’re seeing criminals on those platforms.”


