AI Hacker Online: Anthropic research shows that models can autonomously attack smart contracts for profit
[Block Rhythm] Anthropic just dropped bombshell research: AI can now hack smart contracts, and it's alarmingly good at it.
They ran an experiment with Claude Opus 4.5, Sonnet 4.5, and GPT-5: letting the models attack real smart contracts that had been hacked between 2020 and 2025. The result? The models successfully reproduced an exploit worth $4.6 million. More striking still, after scanning 2,849 contracts that had never been exploited, the models uncovered two new zero-day vulnerabilities and successfully executed simulated attacks against them.
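For a sense of what "reproducing an exploit" looks like mechanically, here is a minimal sketch of the kind of harness such experiments generally rely on: fork the chain at the block just before the real incident, replay a candidate attack transaction, and measure the profit. This is an illustrative assumption, not Anthropic's actual setup; it presumes a local Anvil fork (anvil --fork-url <archive-rpc> --fork-block-number <N>), and the attacker, target, and calldata arguments are placeholders.

```python
# Illustrative sketch only: replaying a candidate exploit on a local fork.
# This is NOT Anthropic's actual harness. Assumes Anvil is running as a
# mainnet fork pinned to the block just before the historical incident.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # local Anvil fork

def replay_exploit(attacker: str, target: str, calldata: str) -> int:
    """Send a candidate exploit transaction on the fork and return the
    attacker's ETH balance change in wei. Nothing touches the real chain.
    (Token-denominated profit would need ERC-20 balance checks instead.)
    """
    # Anvil can impersonate any address on a fork, so no private key is needed.
    w3.provider.make_request("anvil_impersonateAccount", [attacker])
    before = w3.eth.get_balance(attacker)
    tx_hash = w3.eth.send_transaction(
        {"from": attacker, "to": target, "data": calldata}
    )
    w3.eth.wait_for_transaction_receipt(tx_hash)
    return w3.eth.get_balance(attacker) - before
```

Since nothing is ever broadcast, a failed candidate simply reverts on the fork, which is exactly what makes large-scale automated trial and error so cheap.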
The most chilling data point: over the past year, the efficiency with which AI can profit on-chain has roughly doubled every 1.3 months. On a purely technical level, these models are already capable of finding vulnerabilities, executing attacks, and pocketing the profits entirely on their own.
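To put that 1.3-month doubling in perspective, compounding it over a single year works out to roughly a 600x improvement (a back-of-the-envelope check, assuming the rate holds):

```python
# What "doubling every 1.3 months" compounds to over one year.
months, doubling_period = 12, 1.3
print(2 ** (months / doubling_period))  # ~600x in a year
```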
What does this mean? Smart contract security auditing may need to rethink its strategy. We used to defend against humans; now we have to start defending against AI as well.
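As one hedged illustration of what "defending against AI" might look like in practice, defenders can run the same kind of model-driven review continuously on their own contracts before attackers do. The sketch below is an assumption for illustration only: the Etherscan endpoint is real, but the environment variable names, model id, prompt, and overall pipeline are placeholders, not anyone's production audit system.

```python
# Hypothetical sketch of an AI-assisted pre-audit pass, not a real product.
# Assumes an Etherscan API key and the anthropic Python SDK are available.
import os
import requests
import anthropic

def fetch_verified_source(address: str) -> str:
    """Pull verified Solidity source from Etherscan (empty if unverified)."""
    resp = requests.get(
        "https://api.etherscan.io/api",
        params={
            "module": "contract",
            "action": "getsourcecode",
            "address": address,
            "apikey": os.environ["ETHERSCAN_API_KEY"],  # assumed env var
        },
        timeout=30,
    )
    return resp.json()["result"][0]["SourceCode"]

def flag_risks(source: str) -> str:
    """Ask a model for a first-pass list of suspicious patterns."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from env
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # assumed model id
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "List potential vulnerabilities in this contract:\n"
                       + source,
        }],
    )
    return msg.content[0].text
```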
RektRecorder
· 9h ago
A $4.6 million exploit can be reproduced—now that's really something to worry about...
---
AI finds vulnerabilities and exploits them on its own. Feels like smart contract security is really in trouble.
---
Doubling every 1.3 months? That data is enough to scare a ton of project teams.
---
Zero-day vulnerabilities can even be discovered—what were those audit firms even doing before?
---
Seriously, if AI hackers become fully automated, is there anywhere on-chain that's still safe?
---
Anthropic's research is truly wild... even a $4.6 million exploit can be reproduced.
---
Feels like we need to redefine Web3 security—traditional defense might be completely outdated.
---
Both zero-days were found—now that's genuinely terrifying.
---
So now deploying contracts is basically giving money to AI?
DAOdreamer
· 12-03 00:36
A $4.6 million exploit can be replicated? That's not a risk warning, that's a live tutorial!
---
Wait, AI doubles its efficiency every 1.3 months? How long will it take to drain the entire ecosystem...
---
Smart contract auditing really needs an upgrade, otherwise it's a paper-thin defense.
---
I just want to know who will be the first to get hit by a fully automated AI attack. What a plot.
---
Defending against both human attacks and AI attacks: Web3 security really does need to be redefined.
---
With zero-days being found like this, should projects even say "we found a vulnerability" next time? Just say "we got hacked by an AI."
---
$4.6 million, and only one exploit replicated? Feels like there are plenty more waiting to be found.
StopLossMaster
· 12-02 03:34
A $4.6 million exploit reproduced? If that's true, audit firms are going straight out of business.
RetiredMiner
· 12-02 03:33
A $4.6 million exploit replicated, plus zero-days discovered... contract developers can't be sleeping well right now.
SerumSquirter
· 12-02 03:31
Damn, is AI really out there farming profits on its own now? On-chain security is about to be completely reshuffled.
TokenomicsTinfoilHat
· 12-02 03:14
$4.6 million... and AI can profit automatically? We're getting swept along by this narrative.
ArbitrageBot
· 12-02 03:05
The $4.6 million exploit has been reproduced, and only now do we realize we need to defend against AI. A bit late, isn't it?
SocialFiQueen
· 12-02 03:05
Wow, isn't this game over? A $4.6 million exploit replicated outright? AI can find zero-days and make money on its own. This is a big deal!