Alert: AI agents have breached smart contracts and stolen over $4.5 million in simulated assets, according to an Anthropic report that reveals a brewing DeFi security crisis.

The latest research report from AI company Anthropic has sounded the alarm for the entire crypto industry. Its experiments show that current advanced AI agents (such as Claude Sonnet 4.5) can autonomously discover and exploit vulnerabilities in smart contracts, successfully attacking more than half of the contracts tested in a simulated environment and even uncovering two entirely new “zero-day” vulnerabilities. The report warns that as the cost of using AI plummets while the returns from attacks soar, blockchain-based decentralized finance systems face an unprecedented, automated contest between attackers and defenders. This is a severe threat, and it signals the arrival of a new security era in which AI fights AI.

Pandora's Box Is Open: AI Agents Display Formidable Offensive Power

While people are still debating how AI will change the future, Anthropic's report reveals an urgent present: AI agents already possess real-world capabilities that threaten the security of blockchain assets. In a rigorous simulation experiment, the company placed AI models in a simulated blockchain environment; the test targets included 34 smart contracts that had previously been attacked and were deployed after March 2025. The results were striking: the AI agents successfully breached 17 of them, stealing a total of $4.5 million in simulated assets.

The breadth of the threat is even more alarming. In an expanded benchmark, Anthropic selected 405 contracts deployed between 2020 and 2025 on chains including Ethereum, BNB Smart Chain, and Base. The AI models successfully attacked 207 of them, a success rate above 50%, simulating the theft of as much as $550 million. The report states bluntly that most blockchain attacks carried out by human experts in 2025 could, in principle, be executed autonomously by current AI agents. The barrier to attack is rapidly shifting from “requires advanced skills” to “can be automated and run in bulk.”

What worries security experts most is the inventiveness the AI displayed. In a scan of 2,849 recently deployed contracts with no known vulnerabilities, Claude Sonnet 4.5 and GPT-5 still discovered two brand-new “zero-day” vulnerabilities with a potential exploitation value of approximately $3,694. This shows that the threat is not limited to reproducing known attack patterns: the models have begun to hunt for new flaws in unfamiliar code, fundamentally upsetting the balance between attack and defense in smart contract security.

Vulnerability Hunter: How AI Sees Through Contract Risks

How do the AI agents pull this off? The report examines the types of vulnerabilities that were successfully exploited, giving us a window into the agents' attack strategies. The most common type is the authorization vulnerability: access control on a key function (such as one that withdraws user funds) is flawed and fails to strictly verify the caller's identity. Through systematic call testing and state analysis, AI agents can precisely locate these unlocked “backdoors.”
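The report does not publish the vulnerable contracts themselves, but the pattern is easy to illustrate. Here is a minimal, hypothetical Solidity sketch (the contract and function names are invented for illustration, not taken from the report) of an authorization flaw: a withdrawal function that was meant to be owner-only but never checks who is calling.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical example of an authorization vulnerability.
contract VulnerableVault {
    address public owner;
    mapping(address => uint256) public balances;

    constructor() {
        owner = msg.sender;
    }

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    // BUG: intended to be owner-only, but there is no
    // require(msg.sender == owner) check or onlyOwner modifier,
    // so ANY caller can sweep the entire contract balance.
    function emergencyWithdraw(address payable to) external {
        to.transfer(address(this).balance);
    }
}
```

An agent that systematically calls every external function from an unprivileged address would immediately notice that emergencyWithdraw succeeds for arbitrary senders; the fix is a one-line caller check.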

Another high-frequency category is the unprotected “read-only” function. Such functions are not supposed to modify on-chain state, but if designed improperly they can be called maliciously to manipulate the token supply or other critical state variables. AI agents identify these attack vectors by traversing every callable function and analyzing its possible side effects. Missing validation in fee-extraction logic is another common issue, letting attackers illegally drain accumulated transaction fees.
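Again as a hypothetical sketch rather than code from the report, the following contract compresses both patterns into a few lines: a helper that looks read-only but silently mints, and a fee-withdrawal path with no caller validation.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical sketch of the two vulnerability classes above.
contract LeakyToken {
    uint256 public totalSupply;
    mapping(address => uint256) public balanceOf;
    uint256 public accruedFees;

    // BUG 1: documented as a read-only "sync" helper, but it is
    // not declared `view` and actually mints to the caller,
    // letting anyone inflate the supply with a crafted argument.
    function syncSupply(uint256 reported) external returns (uint256) {
        if (reported > totalSupply) {
            balanceOf[msg.sender] += reported - totalSupply;
            totalSupply = reported;
        }
        return totalSupply;
    }

    // BUG 2: fee extraction with no check on the caller, so the
    // accumulated fees can be drained by any address.
    function collectFees(address payable to, uint256 amount) external {
        accruedFees -= amount; // reverts on underflow, but no auth check
        to.transfer(amount);
    }
}
```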

From a technical perspective, large language models (LLMs) excel at this task because they have been trained on massive amounts of code, including Solidity smart contracts, and can associate code patterns with known defect and attack patterns far faster than any human reviewer. Like tireless auditors, they can try combinations of inputs and function-call sequences with extremely high parallelism, searching for the “keys” that drive a contract into unexpected states. Turned to malicious ends, this automated, scalable code-review capability is orders of magnitude more efficient than traditional manual hacking.
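The report does not disclose the agents' tooling, but the systematic call testing it describes resembles what existing fuzzing frameworks already automate. As a rough illustration, here is a Foundry-style fuzz test (hypothetical, targeting the VulnerableVault sketch above) in which the framework replays the same call from many randomly generated caller addresses; an AI agent effectively performs this kind of search, plus the reasoning about which calls are worth trying.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "forge-std/Test.sol";
import {VulnerableVault} from "./VulnerableVault.sol";

// Fuzz harness: Foundry runs this test many times with random
// `attacker` addresses, mimicking brute-force call testing.
contract VaultAuthTest is Test {
    VulnerableVault vault;

    function setUp() public {
        vault = new VulnerableVault();
        vm.deal(address(vault), 10 ether); // seed the vault with funds
    }

    function testFuzz_anyCallerCanDrain(address attacker) public {
        // Restrict to plain EOA-like addresses so the transfer succeeds.
        vm.assume(attacker != address(0) && attacker.code.length == 0);
        vm.prank(attacker); // impersonate an arbitrary, unprivileged caller
        vault.emergencyWithdraw(payable(attacker));
        // If authorization were enforced, the call above would revert
        // for non-owners; instead the vault is empty, exposing the bug.
        assertEq(address(vault).balance, 0);
    }
}
```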

Costs Plummet, Returns Soar: A Dangerous Economic Model

What drives this security crisis is not only technological progress but also a dangerous economic logic. Anthropic's report identifies a key trend: the simulated gains from AI-driven attacks have roughly doubled every 1.3 months over the past year, while the cost of invoking advanced AI models (such as GPT-4o and Claude 3.5 Sonnet) keeps falling rapidly.
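To put that doubling period in perspective (a back-of-the-envelope extrapolation, not a figure from the report), growth that doubles every 1.3 months is exponential:

\[
V(t) = V_0 \cdot 2^{\,t/1.3}, \qquad V(12) = V_0 \cdot 2^{12/1.3} \approx 600\,V_0
\]

where t is measured in months. If the trend held for a full year, simulated attack gains would multiply roughly six-hundred-fold while per-call model costs kept falling.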

This “scissors” effect, with costs and returns moving in opposite directions, creates a powerful incentive for attackers. When the expected return on an attack grows much faster than its cost, deploying swarms of AI agents for dragnet-style vulnerability scanning becomes a profitable business. The report warns: “As costs continue to decline, attackers will deploy more AI agents to probe any code paths leading to valuable assets, no matter how obscure: a forgotten authentication library, a nondescript logging service, or a deprecated API endpoint.” In the future we may face not only precision strikes on leading DeFi protocols but also indiscriminate, automated sweeps of every publicly reachable smart contract.

Anthropic Research Report: Key Data

Test set 1 (recently attacked contracts): 34 contracts tested; 17 successfully attacked; $4.5 million in simulated funds stolen.

Test set 2 (2020–2025 historical benchmark): 405 contracts tested; 207 successfully attacked; $550 million in simulated funds stolen.

Zero-day discovery test: 2,849 newly deployed contracts scanned; 2 new vulnerabilities found, with a potential value of about $3,694.

Attack profit growth: simulated attack profits double roughly every 1.3 months.

AI models involved: Claude Opus 4.5, Claude Sonnet 4.5, GPT-5.

Blockchains involved: Ethereum, BNB Smart Chain, Base.

With AI's Spear, Forge AI's Shield: Opening a New Era of Defense

In the face of such a clear threat, the industry's response must be swift and forceful. Anthropic's report does not merely stoke panic; it points clearly to the solution: AI must be used to defend against AI. This heralds a new, AI-augmented era for smart contract auditing and security assurance. Traditional approaches that rely on limited human resources for code review and fuzz testing will struggle to keep pace with automated attacks.

As a concrete step to advance the defensive side, Anthropic announced that it will open-source the smart contract benchmark dataset used in this research. The aim is to give security developers and researchers worldwide a rigorous, diverse testing sandbox for training and evaluating their own defensive AI models. We can expect more powerful AI-assisted auditing tools to emerge: tools that perform deep pre-deployment scans to find vulnerabilities from the attacker's perspective, and that monitor contracts in real time after launch, recognizing and blocking abnormal transaction patterns as they occur.

The AI-driven arms race has already begun. For project teams, increasing security budgets and adopting more advanced AI auditing services will become a necessity for survival. For developers, learning and adapting to collaborative programming with AI tools will become the new norm. For the entire Web3 industry, this warning serves as a valuable stress test, forcing us to place security back at the core of architecture design while pursuing innovation and efficiency. Only by actively embracing this transformation can we turn threats into opportunities to enhance the overall security level of the industry.

Conclusion

Anthropic's report is a mirror: it reflects the sharp edge of AI's dark side while showing how its brighter side can reinforce the shield. The narrative of smart contract security is shifting from a human contest between white-hat and black-hat hackers to an algorithmic race between defensive AI and offensive AI. There are no spectators in this race; it concerns every protocol built on a blockchain, every asset locked in DeFi, and the trust foundation of the entire decentralized ecosystem. It is time to update our understanding of risk and act now, because AI agents never rest, and our defenses must always be online.
