AI agents are now helping you make money, but the difficult part is…
Author: Vaidik Mandloi
Original Title: Know Your Agent
Translation and Compilation: BitpushNews
The promise that AI agents will reshape the internet landscape is gradually becoming reality. They have moved beyond experimental tools in chat windows to become an essential part of our daily operations—cleaning inboxes, scheduling meetings, responding to support tickets. They are quietly boosting productivity, often without people noticing.
But this growth is not just hype.
In 2025, automated traffic surpassed human traffic for the first time, accounting for 51% of total online activity. AI-driven traffic on U.S. retail websites alone has grown 4,700% year-over-year. AI agents now operate across systems; many can access data, trigger workflows, and even initiate transactions.
However, trust in fully autonomous agents has dropped from 43% to 22% within a year, largely due to rising security incidents. Nearly half of enterprises still use shared API keys for agent authentication, a method never designed for autonomous systems to transfer value or act independently.
The problem is: the pace of agent expansion outstrips the infrastructure meant to govern them.
In response, new protocol layers are emerging. Stablecoins, card network integrations, and native standards like x402 are enabling machine-initiated transactions. Meanwhile, new identity and verification layers are being developed to help agents recognize themselves and operate within structured environments.
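To make the payment layer concrete, here is a toy simulation of the HTTP 402 loop that x402-style protocols enable: the service quotes a price, the agent settles and retries with proof. The status codes are real HTTP, but the header names and payload format here are invented for illustration, not taken from the x402 specification.

```python
# Toy simulation of a machine-initiated payment loop. A real x402 flow
# settles on-chain; here the "payment" is just an echoed string so the
# sketch stays self-contained.

def server(path: str, headers: dict):
    """Minimal paid endpoint: quote a price, then serve once paid."""
    price = {"/report": "0.01 USDC"}.get(path)
    if price is None:
        return 404, {}, "not found"
    if headers.get("X-Payment-Proof") == f"paid:{price}":
        return 200, {}, "here is the report"
    # Ask the client to pay before retrying.
    return 402, {"X-Payment-Required": price}, ""

def agent_fetch(path: str):
    """Agent side: on 402, attach a payment proof and retry once."""
    status, hdrs, body = server(path, {})
    if status == 402:
        # In a real flow the agent would settle and attach a receipt;
        # here we simply echo the quoted price back as "proof".
        proof = f"paid:{hdrs['X-Payment-Required']}"
        status, hdrs, body = server(path, {"X-Payment-Proof": proof})
    return status, body

print(agent_fetch("/report"))  # (200, 'here is the report')
```

The point of the round trip is that no human ever sees a checkout page: price discovery, settlement, and retry all happen in protocol.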
But enabling payments doesn’t equate to enabling an economy. Once agents can transfer value, deeper issues surface: How do they discover suitable services in a machine-readable way? How do they prove identity and authorization? How can we verify that the actions they claim to have performed actually occurred?
This article explores the infrastructure needed for large-scale, agent-driven economies and assesses whether these layers are mature enough to support persistent, autonomous participants operating at machine speed.
Agents Can’t Buy What They Can’t See
Before an agent can pay for a service, it must first find that service. This sounds simple, but it is currently one of the biggest sources of friction.
The internet is built for humans to read pages. When humans search for content, search engines return ranked links. These pages are optimized for persuasion. They’re filled with layouts, trackers, ads, navigation bars, and stylistic elements—meaningful to humans but mostly “noise” to machines.
When an agent requests the same page, it receives raw HTML. A typical blog post or product page might contain around 16,000 tokens in this form. Converting it into clean Markdown reduces the token count to about 3,000. That’s an 80% reduction in content the model needs to process. For a single request, this difference may be negligible. But when agents make thousands of such requests across multiple services, the cumulative overhead leads to delays, higher costs, and increased inference complexity.
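The scale of that reduction is easy to reproduce. Below is a minimal sketch, using only Python's standard-library HTML parser and a crude chars-per-token heuristic (roughly four characters per token for English text, an assumption, not a tokenizer), that strips a page down to its visible text and compares rough token counts.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only visible text, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def rough_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

page = ("<html><head><style>body{color:red}</style></head>"
        "<body><nav>Home | About</nav><article><h1>Title</h1>"
        "<p>The actual content an agent cares about.</p></article></body></html>")

extractor = TextExtractor()
extractor.feed(page)
plain = "\n".join(extractor.parts)

print(rough_tokens(page), "->", rough_tokens(plain))
```

On a real product page the gap is far larger, because markup, trackers, and inline scripts dominate the byte count while contributing nothing the model can act on.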
(Image source: Cloudflare)
Ultimately, agents spend significant compute resources stripping away interface elements to access the core information needed to act. This effort doesn’t improve output quality; it merely compensates for a web designed without their needs.
As agent-driven traffic grows, this inefficiency becomes more apparent. AI crawlers on retail and software sites have surged over the past year, now constituting a large portion of total web activity.
Meanwhile, about 79% of major news and content sites block at least one AI crawler. From their perspective, this is understandable. Agents extract content without engaging with ads, subscriptions, or traditional conversion funnels. Blocking them protects revenue.
The problem is: the web lacks reliable ways to distinguish malicious scrapers from legitimate procurement agents. Both appear as automated traffic, both originate from cloud infrastructure, and to the system, they look identical.
Deeper still, agents aren’t trying to “consume” pages—they’re trying to discover actionable opportunities.
When humans search “flights under $500,” a ranked list of links suffices. They compare options and decide. When agents receive the same instruction, they need something entirely different: knowledge of which services accept bookings, input formats, pricing mechanisms, and whether payments can be settled programmatically. Few services openly publish this information clearly.
(Image source: TowardsAI)
This is why the shift is happening from search engine optimization (SEO) to agent-oriented discoverability, often called AEO. If the end user is an agent, ranking on search pages becomes less important. What matters is whether services can describe their capabilities in a machine-readable way that agents can interpret without guesswork. If not, they risk becoming “invisible” in the growing economic activity.
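What "machine-readable capability description" means in practice is something like the manifest below: a structured document a service could publish at a well-known URL, declaring what it can do, what inputs it expects, and how it settles. The field names here are illustrative, not any published standard.

```python
import json

# Hypothetical capability manifest a flight-booking service might publish.
# Every field name below is invented for illustration.
manifest_json = """
{
  "service": "example-flights",
  "capabilities": [
    {
      "name": "search_flights",
      "inputs": {"origin": "IATA code", "destination": "IATA code",
                 "max_price_usd": "number"},
      "pricing": {"per_call_usd": 0.002},
      "settlement": ["card", "stablecoin"]
    }
  ]
}
"""

manifest = json.loads(manifest_json)

def supports(manifest: dict, capability: str, settlement: str) -> bool:
    """Check whether a service advertises a capability on a given settlement rail."""
    for cap in manifest.get("capabilities", []):
        if cap["name"] == capability and settlement in cap.get("settlement", []):
            return True
    return False

print(supports(manifest, "search_flights", "stablecoin"))  # True
```

An agent reading this needs no ranking, no rendering, and no guesswork: it can decide in one pass whether the service fits the task and whether it can pay.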
Agents Need Identity
(Image source: Hackernoon)
Once agents can discover services and initiate transactions, the next major challenge is establishing who they are interacting with—identity.
Today’s financial systems rely heavily on machine identities. In finance, non-human identities outnumber human ones by roughly 96 to 1. API keys, service accounts, automation scripts, and internal agents dominate institutional infrastructure. Most were never designed to have discretion over capital. They execute predefined instructions but cannot negotiate, choose vendors, or initiate payments on open networks.
Autonomous agents blur this boundary. If an agent can directly move stablecoins or trigger checkout flows without manual confirmation, the core question shifts from “Can it pay?” to “Who authorized it to pay?”
This is where identity becomes foundational, giving rise to the concept of “Know Your Agent” (KYA).
Just as financial institutions verify clients before allowing them to trade, services interacting with autonomous agents must run a similar set of checks before granting access to capital or sensitive operations: who deployed the agent, what it is authorized to do, and who is accountable for its actions. Together, these checks form an identity stack.
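A minimal sketch of such a gate, under heavy simplifying assumptions: an issuer signs a delegation credential for the agent, and the service checks signature, scope, budget, and expiry before acting. HMAC with a shared secret stands in for real public-key signatures, and every field name is invented for illustration.

```python
import hmac, hashlib, json, time

# Illustrative KYA gate. Before a service lets an agent spend, it checks:
# (1) the credential is signed by a known issuer,
# (2) the requested action is within the delegated scope,
# (3) the spend is within budget and the delegation has not expired.

ISSUER_KEYS = {"acme-issuer": b"shared-secret-demo-only"}

def sign(credential: dict, key: bytes) -> str:
    payload = json.dumps(credential, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def authorize(credential: dict, signature: str,
              action: str, amount: float, now: float) -> bool:
    key = ISSUER_KEYS.get(credential.get("issuer"))
    if key is None or not hmac.compare_digest(sign(credential, key), signature):
        return False                      # unknown issuer or forged credential
    if action not in credential["scopes"]:
        return False                      # action outside delegated authority
    if amount > credential["spend_limit_usd"] or now > credential["expires"]:
        return False                      # over budget or expired delegation
    return True

cred = {"issuer": "acme-issuer", "agent": "procurement-bot-7",
        "scopes": ["purchase"], "spend_limit_usd": 50.0,
        "expires": time.time() + 3600}
sig = sign(cred, ISSUER_KEYS["acme-issuer"])

print(authorize(cred, sig, "purchase", 19.99, time.time()))  # True
print(authorize(cred, sig, "transfer", 19.99, time.time()))  # False
```

The important property is that every "yes" is traceable back to an explicit human-granted delegation, which is exactly what shared API keys cannot provide.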
Meanwhile, universal commerce protocols like UCP, led by Google and Shopify, enable merchants to publish “capability lists” that agents can discover and negotiate. These act as orchestration layers, expected to integrate into Google Search and Gemini.
(Image source: Fintech Brainfood)
A key subtlety is that permissioned and permissionless systems will coexist.
On public blockchains, agents can transact without centralized gatekeeping, increasing speed and composability but also regulatory pressure. Stripe’s acquisition of Bridge highlights this tension. Stablecoins enable instant cross-border transfers, but compliance obligations don’t vanish just because settlement occurs on-chain.
This tension inevitably involves regulators. Once autonomous agents can initiate financial transactions and interact with markets without direct human oversight, accountability becomes unavoidable. Financial systems cannot allow capital to flow through unrecognized or unauthorized actors—even if those actors are code snippets.
Regulatory frameworks are being adopted. Colorado’s AI Act, effective February 1, 2026, introduces accountability requirements for high-risk automated systems, with similar legislation advancing globally. As agents begin executing financial decisions at scale, identity will no longer be optional. If discoverability makes agents visible, identity is what makes them recognizable and accountable.
Verifying Agent Execution and Reputation
Once agents start performing tasks involving money, contracts, or sensitive data, merely having an identity isn’t enough. A verified agent can still hallucinate, distort its work, leak information, or perform poorly.
The key question then becomes: how can we prove that an agent actually completed what it claimed?
If an agent states it analyzed 1,000 documents, detected fraud patterns, or executed trades, there must be a way to verify that this computation indeed occurred and that the output wasn’t forged or corrupted. For this, we need a performance layer.
Several verification mechanisms are emerging, each addressing the same core problem from a different angle. But proof of execution is often episodic: it verifies individual tasks, while markets need cumulative reputation.
Reputation transforms isolated proofs into a long-term performance history. Emerging systems aim to make agent performance portable and cryptographically anchored, rather than relying on platform-specific ratings or opaque internal dashboards.
Ethereum Attestation Service (EAS) allows users or services to publish signed, on-chain attestations about agent behavior. Successful task completion, accurate predictions, or compliant transactions can be recorded immutably and carried across applications.
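A simplified, off-chain model of what such an attestation looks like: a structured claim about an agent's behavior, addressed by a content hash so any platform can reference the same record. Real EAS attestations live on-chain and follow a registered schema; this sketch only mirrors the shape, and the schema name is invented.

```python
import hashlib, json
from dataclasses import dataclass, asdict

# Simplified EAS-style attestation: a claim about an agent, made
# content-addressable so its identifier is stable across platforms.

@dataclass(frozen=True)
class Attestation:
    attester: str     # who is vouching
    subject: str      # the agent being rated
    schema: str       # e.g. "task-completion-v1" (illustrative name)
    data: str         # JSON-encoded claim body
    timestamp: int

def attestation_uid(att: Attestation) -> str:
    """Derive a deterministic identifier from the claim's content."""
    payload = json.dumps(asdict(att), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

att = Attestation(
    attester="0xMerchantDAO",
    subject="0xAgent42",
    schema="task-completion-v1",
    data=json.dumps({"task": "invoice-reconciliation", "success": True}),
    timestamp=1700000000,
)

uid = attestation_uid(att)
print(uid[:16])  # same claim always yields the same identifier
```

Because the identifier is derived from the content, any party holding the claim can recompute and verify it, which is what makes the reputation record portable rather than platform-locked.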
(Image source: EAS)
Competitive benchmarking environments are also emerging. Agent Arenas evaluate agents on standardized tasks, using Elo or similar scoring systems. Recall Network reports over 110,000 participants generating 5.88 million predictions, creating measurable performance data. As these systems expand, they resemble real rating markets for AI agents.
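The Elo mechanics behind such arenas are simple: after each head-to-head task, the winner takes points from the loser in proportion to how surprising the result was. The standard update is:

```python
# Standard Elo update, the kind of scoring an agent arena can use to
# turn head-to-head task outcomes into a single comparable rating.

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Return updated ratings after one match. score_a is 1 for a win,
    0.5 for a draw, 0 for a loss, from agent A's perspective."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    new_a = r_a + k * (score_a - expected_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Two agents at 1500; A wins, so A gains and B loses 16 points each.
a, b = elo_update(1500, 1500, 1.0)
print(round(a), round(b))  # 1516 1484
```

Because the update is zero-sum and self-correcting, ratings converge toward relative skill over many tasks, which is what makes them usable as a market signal.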
This enables reputation to be portable across platforms.
In traditional finance, agencies like Moody’s rate bonds to signal creditworthiness. The agent economy will need an equivalent layer to rate non-human actors. Markets will evaluate whether an agent is sufficiently reliable to entrust with capital, whether its outputs are statistically consistent, and whether its behavior remains stable over time.
Conclusion
As agents gain real authority, markets will require a clear way to assess their reliability. Agents will carry portable performance records based on verified execution and benchmarking, with scores adjusted for quality and traceable to explicit authorizations. Insurers, merchants, and compliance systems will rely on this data to decide which agents can access capital, data, or regulated workflows.
In sum, these layers are beginning to form the infrastructure of the agent economy: machine-readable discoverability, verifiable identity, proof of execution, and portable reputation.