AI That Steals Faster Than You Can Audit

The era of manual auditing in DeFi is ending. GPT-5 and Claude Opus 4.5 can autonomously identify and exploit vulnerabilities in Ethereum smart contracts. The future integrity of the DeFi sector now hinges on the industry's capacity to deploy defensive AI at the speed of the threat.
How AI Executes the Exploit
A smart contract is a self-executing program, written mostly in Solidity for the Ethereum Virtual Machine. It operates on a simple pattern: input funds, meet the conditions, receive the output. A vulnerability is a coding flaw that allows an attacker to bypass the intended conditions.
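The pattern above can be made concrete with a toy model. This is a hypothetical, simplified Python sketch (real contracts are written in Solidity for the EVM); the escrow scenario and function names are illustrative, not from the article:

```python
# Toy model of "input funds, meet conditions, get output".
def escrow_payout(deposited, released, caller, beneficiary):
    # Intended condition: funds flow only to the beneficiary, only once released.
    if released and caller == beneficiary:
        return deposited
    return 0

def escrow_payout_flawed(deposited, released, caller, beneficiary):
    # Vulnerability: the caller check was forgotten, so once `released` is
    # True, ANY address can claim the escrowed funds.
    if released:
        return deposited
    return 0

assert escrow_payout(100, True, "mallory", "alice") == 0        # intended: blocked
assert escrow_payout_flawed(100, True, "mallory", "alice") == 100  # flaw: bypassed
```

The flawed version is exactly the kind of condition-bypass an exploit hunts for: the contract still "works" for honest users, so the bug survives casual review.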
GPT-5 and Claude Opus 4.5 function as Agentic Exploit Generators using a sophisticated iterative process:
- The AI agent is given a target: a contract address and block number. It uses specialized tools to fetch the source code, the contract's ABI (Application Binary Interface), and the current on-chain state.
- The model analyzes the code and state for known weakness patterns. It then synthesizes an exploit Proof-of-Concept as a new malicious Solidity contract.
- The PoC is run in a simulated blockchain environment (e.g., a forked network using Foundry). This is the crucial step. If the exploit fails, the AI analyzes the transaction trace and revert reason, using this feedback to refine and generate a new, optimized exploit script.
- The agent reports success only if the exploit yields net positive revenue, i.e., the simulated profit exceeds the transaction costs.
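The four steps above form a generate-simulate-refine loop. Here is a minimal Python sketch of that loop; the data shapes, function names (`run_agent_loop`, `SimResult`), and the toy outcomes are hypothetical stand-ins, with a real system's forked-network simulation (e.g., via Foundry) replaced by a lookup:

```python
from dataclasses import dataclass

@dataclass
class SimResult:
    profit: int          # simulated revenue, in wei
    gas_cost: int        # simulated gas spend, in wei
    revert_reason: str   # empty string if the transaction succeeded

def run_agent_loop(candidates, simulate, max_iters=5):
    """Iterate generate -> simulate -> refine until a net-profitable PoC is found.

    `candidates` yields PoC variants (here just labels); `simulate` stands in
    for running the PoC against a forked blockchain. Both are illustrative.
    """
    poc = next(candidates, None)
    for _ in range(max_iters):
        if poc is None:
            return None
        result = simulate(poc)
        if result.revert_reason == "" and result.profit > result.gas_cost:
            return poc                    # report only net-positive exploits
        poc = next(candidates, None)      # "refine": move to the next variant
    return None

# Toy demo: the first PoC reverts, the second loses money, the third profits.
outcomes = {
    "poc_v1": SimResult(0, 50, "ReentrancyGuard: reentrant call"),
    "poc_v2": SimResult(40, 50, ""),
    "poc_v3": SimResult(1000, 50, ""),
}
winner = run_agent_loop(iter(outcomes), lambda p: outcomes[p])
print(winner)  # poc_v3
```

The key property is the feedback edge: a failed simulation returns a trace and revert reason, and the next candidate is conditioned on that failure rather than generated blind.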
Researchers have successfully used these agents to reproduce exploits for hundreds of historical vulnerabilities on a benchmark, collectively generating simulated exploits worth $4.6 million. In simulations against recently deployed, unaudited contracts, the agents successfully uncovered zero-day vulnerabilities, flaws previously unknown to anyone.
The Economic Asymmetry and the Audit Crisis
Manual auditing is not scalable against this new level of threat. Human auditors are struggling to keep pace, and the market reflects it: Chainalysis reports that DeFi-related hacks accounted for a dominant share of all crypto theft in 2024, a trend that AI exploitation will likely accelerate. This rising risk jeopardizes user adoption and market value across the entire Ethereum ecosystem.
Consider the most famous example, the 2016 DAO hack, which exploited a reentrancy bug. A reentrancy bug occurs when a contract sends Ether to an external address and only updates its internal state after the external call. The attacker's contract includes a fallback function that, on receiving the Ether, calls back into the original contract to withdraw again before the balance is reduced, allowing the withdrawal to loop multiple times. AI agents can readily model and execute this multi-transaction sequence, which is difficult for static analysis alone to catch.
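The reentrancy loop can be demonstrated with a toy Python simulation (not real EVM semantics; the `VulnerableVault` class and callback mechanics are illustrative assumptions standing in for a Solidity contract and its fallback function):

```python
# Toy model of DAO-style reentrancy: the "send" happens before the
# balance bookkeeping, so a malicious callback can withdraw repeatedly.
class VulnerableVault:
    def __init__(self, funds):
        self.balances = {}
        self.total = funds

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.total += amount

    def withdraw(self, who, receive_hook):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.total >= amount:
            self.total -= amount      # external "send" happens first...
            receive_hook()            # attacker's fallback re-enters here
            self.balances[who] = 0    # ...state is updated only afterwards

vault = VulnerableVault(funds=100)
vault.deposit("attacker", 10)
reentries = {"count": 0}

def attacker_fallback():
    # Re-enter three times: the recorded balance is still 10 each time.
    if reentries["count"] < 3:
        reentries["count"] += 1
        vault.withdraw("attacker", attacker_fallback)

vault.withdraw("attacker", attacker_fallback)
print(vault.total)  # 70: four withdrawals of 10 against a single 10 deposit
```

The fix mirrors the standard checks-effects-interactions pattern: zero out `self.balances[who]` before invoking the external callback, so a re-entrant call sees a balance of 0 and stops.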
Key Takeaways
- Modern LLM agents can autonomously generate, test, and refine profitable smart contract exploits.
- Human-paced reviews cannot keep up with iterative, feedback-driven exploit generation.
- One successful exploit pays for infinite failed ones; defenders must be right every time.
- Many high-impact exploits (reentrancy, state-dependent logic flaws) only emerge through dynamic execution.
- AI agents are uncovering vulnerabilities that no human has previously documented or recognized.