AI agents are now helping you make money, but the difficult part is…

Author: Vaidik Mandloi

Original Title: Know Your Agent

Translation and Compilation: BitpushNews


The promise that AI agents will reshape the internet landscape is gradually becoming reality. They have moved beyond experimental tools in chat windows to become an essential part of our daily operations—cleaning inboxes, scheduling meetings, responding to support tickets. They are quietly boosting productivity, often without people noticing.

But this growth is not just hype.

In 2025, automated traffic surpassed human traffic for the first time, accounting for 51% of total online activity. AI-driven traffic on U.S. retail websites alone has grown 4,700% year-over-year. AI agents now operate across systems; many can access data, trigger workflows, and even initiate transactions.

However, trust in fully autonomous agents has dropped from 43% to 22% within a year, largely due to rising security incidents. Nearly half of enterprises still use shared API keys for agent authentication, a method never designed for autonomous systems to transfer value or act independently.

The problem is: the pace of agent expansion outstrips the infrastructure meant to govern them.

In response, new protocol layers are emerging. Stablecoins, card network integrations, and native standards like x402 are enabling machine-initiated transactions. Meanwhile, new identity and verification layers are being developed to help agents recognize themselves and operate within structured environments.

But enabling payments doesn’t equate to enabling an economy. Once agents can transfer value, deeper issues surface: How do they discover suitable services in a machine-readable way? How do they prove identity and authorization? How can we verify that the actions they claim to have performed actually occurred?

This article explores the infrastructure needed for large-scale, agent-driven economies and assesses whether these layers are mature enough to support persistent, autonomous participants operating at machine speed.

Agents Can’t Buy What They Can’t See

Before an agent can pay for a service, it must first find that service. That sounds simple, but it is currently where the most friction lies.

The internet is built for humans to read pages. When humans search for content, search engines return ranked links. These pages are optimized for persuasion. They’re filled with layouts, trackers, ads, navigation bars, and stylistic elements—meaningful to humans but mostly “noise” to machines.

When an agent requests the same page, it receives raw HTML. A typical blog post or product page might contain around 16,000 tokens in this form. Converting it into clean Markdown reduces the token count to about 3,000. That’s an 80% reduction in content the model needs to process. For a single request, this difference may be negligible. But when agents make thousands of such requests across multiple services, the cumulative overhead leads to delays, higher costs, and increased inference complexity.
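
The overhead is easy to see in a toy sketch. The snippet below strips markup with Python's standard-library HTML parser and estimates token counts with a rough four-characters-per-token heuristic; both the sample page and the heuristic are illustrative, not a real tokenizer:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only visible text, dropping tags, scripts, and styles."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside <script>/<style> blocks

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def rough_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

# A miniature "page": navigation, styling, and tracking around one paragraph.
html = (
    "<html><head><style>nav{color:red}</style></head><body>"
    "<nav>Home | About | Subscribe</nav>"
    "<article><h1>Post</h1><p>The actual content an agent needs.</p></article>"
    "<script>track()</script></body></html>"
)

extractor = TextExtractor()
extractor.feed(html)
clean = " ".join(t.strip() for t in extractor.parts if t.strip())

print(rough_tokens(html), "tokens raw vs", rough_tokens(clean), "after stripping")
```

On real pages the ratio is far larger than in this toy example, since layout, trackers, and ads dominate the raw HTML.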

[Image source: @Cloudflare]

Ultimately, agents spend significant compute resources stripping away interface elements to access the core information needed to act. This effort doesn’t improve output quality; it merely compensates for a web designed without their needs.

As agent-driven traffic grows, this inefficiency becomes more apparent. AI crawlers on retail and software sites have surged over the past year, now constituting a large portion of total web activity.

Meanwhile, about 79% of major news and content sites block at least one AI crawler. From their perspective, this is understandable. Agents extract content without engaging with ads, subscriptions, or traditional conversion funnels. Blocking them protects revenue.

The problem is: the web lacks reliable ways to distinguish malicious scrapers from legitimate procurement agents. Both appear as automated traffic, both originate from cloud infrastructure, and to the system, they look identical.

Deeper still, agents aren’t trying to “consume” pages—they’re trying to discover actionable opportunities.

When humans search “flights under $500,” a ranked list of links suffices. They compare options and decide. When agents receive the same instruction, they need something entirely different: knowledge of which services accept bookings, input formats, pricing mechanisms, and whether payments can be settled programmatically. Few services openly publish this information clearly.
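
What such a machine-readable description could look like can be sketched with a hypothetical capability manifest. The service name and every field below are invented for illustration; this is not an existing standard:

```python
import json

# Hypothetical manifest a flight service might publish for agents.
# All field names here are illustrative, not drawn from any real spec.
manifest_json = """
{
  "service": "example-flights",
  "capabilities": [{
    "action": "search_flights",
    "inputs": {"origin": "IATA code", "destination": "IATA code", "date": "YYYY-MM-DD"},
    "pricing": {"currency": "USD"},
    "settlement": ["card", "stablecoin"]
  }]
}
"""

manifest = json.loads(manifest_json)

def supports(manifest: dict, action: str, settlement: str) -> bool:
    """Check whether a service advertises an action with a given settlement rail."""
    return any(
        cap["action"] == action and settlement in cap["settlement"]
        for cap in manifest["capabilities"]
    )

print(supports(manifest, "search_flights", "stablecoin"))  # True
```

With a structure like this, an agent can decide in one pass whether a service is usable, instead of inferring capabilities from rendered pages.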

[Image source: @TowardsAI]

This is why the shift is happening from search engine optimization (SEO) to agent-oriented discoverability, often called AEO. If the end user is an agent, ranking on search pages becomes less important. What matters is whether services can describe their capabilities in a machine-readable way that agents can interpret without guesswork. If not, they risk becoming “invisible” in the growing economic activity.

Agents Need Identity

[Image source: @Hackernoon]

Once agents can discover services and initiate transactions, the next major challenge is establishing who they are interacting with—identity.

Today’s financial systems rely heavily on machine identities. In finance, non-human identities outnumber human ones by roughly 96 to 1. API keys, service accounts, automation scripts, and internal agents dominate institutional infrastructure. Most were never designed to have discretion over capital. They execute predefined instructions but cannot negotiate, choose vendors, or initiate payments on open networks.

Autonomous agents blur this boundary. If an agent can directly move stablecoins or trigger checkout flows without manual confirmation, the core question shifts from “Can it pay?” to “Who authorized it to pay?”

This is where identity becomes foundational, giving rise to the concept of “Know Your Agent” (KYA).

Just as financial institutions verify clients before allowing trading, services interacting with autonomous agents must verify three things before granting access to capital or sensitive operations:

  1. Cryptographic authenticity: Does this agent truly control the keys it claims?
  2. Delegation permissions: Who authorized this agent, and what are its limits?
  3. Real-world linkage: Is this agent associated with a legally responsible entity?

These checks form an identity stack:

  • Bottom layer: cryptographic keys and signatures. Standards like ERC-8004 formalize how agents register and anchor identities on verifiable chains.
  • Middle layer: identity providers linking keys to real-world entities—companies, financial institutions, verified individuals. Without this binding, signatures only prove control, not accountability.
  • Edge layer: verification infrastructure—payment processors, CDNs, or application servers that validate signatures, check credentials, and enforce permissions in real time. Visa’s Trusted Agent Protocol (TAP) exemplifies permissioned commerce, allowing merchants to verify if an agent is authorized to act on behalf of a user. Stripe’s Agent Commerce Protocol (ACP) is extending similar checks into programmable checkout and stablecoin flows.
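
A minimal sketch of what such an edge-layer check could do, assuming a signed "mandate" that encodes delegation limits. HMAC with a shared demo key stands in for a real public-key signature scheme, and every field name below is hypothetical:

```python
import hashlib
import hmac
import json
import time

SECRET = b"shared-demo-key"  # stand-in for a real public-key scheme

def sign_mandate(mandate: dict) -> str:
    """Sign a delegation mandate (demo only: HMAC over canonical JSON)."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_request(mandate, signature, action, amount_usd, now=None):
    """Edge-layer check: valid signature, unexpired, in scope, under cap."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False, "bad signature"
    now = time.time() if now is None else now
    if now > mandate["expires_at"]:
        return False, "mandate expired"
    if action not in mandate["scopes"]:
        return False, "out of scope"
    if amount_usd > mandate["spend_cap_usd"]:
        return False, "over spend cap"
    return True, "ok"

mandate = {
    "agent_id": "agent-123",
    "principal": "acme-corp",
    "scopes": ["book_flight"],
    "spend_cap_usd": 500,
    "expires_at": time.time() + 3600,
}
sig = sign_mandate(mandate)
print(verify_request(mandate, sig, "book_flight", 450))    # allowed
print(verify_request(mandate, sig, "wire_transfer", 450))  # rejected: out of scope
```

The point of the sketch is the ordering: cryptographic authenticity is checked first, then delegation limits, so a valid key alone never grants unbounded authority.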

Meanwhile, universal commerce protocols like UCP, led by Google and Shopify, enable merchants to publish “capability lists” that agents can discover and negotiate over. These act as orchestration layers and are expected to integrate into Google Search and Gemini.

[Image source: @FintechBrainfood]

A key subtlety is that permissioned and permissionless systems will coexist.

On public blockchains, agents can transact without centralized gatekeeping, increasing speed and composability but also regulatory pressure. Stripe’s acquisition of Bridge highlights this tension. Stablecoins enable instant cross-border transfers, but compliance obligations don’t vanish just because settlement occurs on-chain.

This tension inevitably involves regulators. Once autonomous agents can initiate financial transactions and interact with markets without direct human oversight, accountability becomes unavoidable. Financial systems cannot allow capital to flow through unrecognized or unauthorized actors—even if those actors are code snippets.

Regulatory frameworks are being adopted. Colorado’s AI Act, effective February 1, 2026, introduces accountability requirements for high-risk automated systems, with similar legislation advancing globally. As agents begin executing financial decisions at scale, identity will no longer be optional. If discoverability makes agents visible, identity is what makes them recognizable and accountable.

Verifying Agent Execution and Reputation

Once agents start performing tasks involving money, contracts, or sensitive data, merely having an identity isn’t enough. A verified agent can still hallucinate, distort its work, leak information, or perform poorly.

The key question then becomes: how can we prove that an agent actually completed what it claimed?

If an agent states it analyzed 1,000 documents, detected fraud patterns, or executed trades, there must be a way to verify that this computation indeed occurred and that the output wasn’t forged or corrupted. For this, we need a performance layer.

Currently, three approaches exist:

  1. Trusted Execution Environments (TEEs): The first relies on hardware proofs via platforms like AWS Nitro or Intel SGX. In this mode, the agent runs inside a secure enclave that issues cryptographic attestations confirming specific code executed on specific data without tampering. Overhead is modest (around 5-10% latency), acceptable for high-integrity enterprise and financial use cases.
  2. Zero-Knowledge Machine Learning (ZKML): The second approach is mathematical. ZKML enables agents to generate cryptographic proofs that their outputs derive from a particular model, without revealing the model weights or private inputs. Recent demonstrations like DeepProve-1 by Lagrange Labs show GPT-2 inference in full zero-knowledge, 54-158 times faster than previous methods.
  3. Restake Security: The third relies on economic incentives rather than computation. Protocols like EigenLayer introduce stake-based security, where validators back an agent’s output with collateral. If the output is challenged and proven false, the stake is slashed. This system doesn’t prove each computation’s correctness but makes dishonest behavior economically irrational.
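
The economic logic of the third approach can be shown with a toy expected-value model. All numbers below are illustrative:

```python
def expected_profit(dishonest_gain, stake, detection_prob):
    """Expected profit of a dishonest validator under stake-slashing.

    With probability `detection_prob` the false output is challenged and
    the full stake is slashed; otherwise the validator keeps the gain.
    """
    return (1 - detection_prob) * dishonest_gain - detection_prob * stake

# Illustrative numbers: cheating nets $1,000, the stake is $50,000,
# and challenges catch false outputs 10% of the time.
ev = expected_profit(1_000, 50_000, 0.10)
print(ev)  # strongly negative: cheating is expected-value losing
```

Note that with no stake at risk the same model makes cheating profitable, which is exactly why the collateral requirement carries the security.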

These mechanisms attack the same core problem from different angles. But proofs of execution are episodic: they verify individual tasks, while markets need cumulative reputation.

Reputation transforms isolated proofs into a long-term performance history. Emerging systems aim to make agent performance portable and cryptographically anchored, rather than relying on platform-specific ratings or opaque internal dashboards.

Ethereum Attestation Service (EAS) allows users or services to publish signed, on-chain attestations about agent behavior. Successful task completion, accurate predictions, or compliant transactions can be recorded immutably and carried across applications.
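
A toy sketch of how signed attestations could roll up into a portable score. The record format below is invented for illustration and is not the real EAS schema or SDK:

```python
import hashlib
import json

# Toy attestation records: each is one claim about a completed task.
# (Fields are illustrative, not the real EAS attestation format.)
attestations = [
    {"agent": "agent-123", "task": "doc-analysis", "success": True,  "attester": "svc-a"},
    {"agent": "agent-123", "task": "fraud-check",  "success": True,  "attester": "svc-b"},
    {"agent": "agent-123", "task": "trade-exec",   "success": False, "attester": "svc-c"},
]

def attestation_uid(att: dict) -> str:
    """Content-addressed ID: the same claim always hashes the same way."""
    return hashlib.sha256(json.dumps(att, sort_keys=True).encode()).hexdigest()[:16]

def success_rate(agent: str, atts: list) -> float:
    """Aggregate an agent's attestations into a simple success score."""
    mine = [a for a in atts if a["agent"] == agent]
    return sum(a["success"] for a in mine) / len(mine)

print(success_rate("agent-123", attestations))  # ~0.67
```

Because each record is content-addressed and signed by its attester, any application can recompute the same score from the same history, which is what makes the reputation portable.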

[Image source: @EAS]

Competitive benchmarking environments are also emerging. Agent Arenas evaluate agents on standardized tasks, using Elo or similar scoring systems. Recall Network reports over 110,000 participants generating 5.88 million predictions, creating measurable performance data. As these systems expand, they resemble real rating markets for AI agents.
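
The Elo mechanic these arenas borrow is simple to state. Below is the standard update rule, applied to two hypothetical agents at equal rating:

```python
def elo_update(r_a, r_b, score_a, k=32):
    """Standard Elo update: `score_a` is 1 for a win, 0.5 draw, 0 loss."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))  # win probability of A
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta  # ratings are zero-sum per match

# Two agents at equal rating; agent A wins the benchmark task.
a, b = elo_update(1500, 1500, 1.0)
print(round(a), round(b))  # 1516 1484
```

The appeal for agent markets is the same as in chess: ratings converge toward true skill from pairwise outcomes alone, with no trusted central grader.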


This enables reputation to be portable across platforms.

In traditional finance, agencies like Moody’s rate bonds to signal creditworthiness. The agent economy will need an equivalent layer to rate non-human actors. Markets will evaluate whether an agent is sufficiently reliable to entrust with capital, whether its outputs are statistically consistent, and whether its behavior remains stable over time.

Conclusion

As agents gain real authority, markets will require a clear way to assess their reliability. Agents will carry portable performance records based on verified execution and benchmarking, with scores adjusted for quality and traceable to explicit authorizations. Insurers, merchants, and compliance systems will rely on this data to decide which agents can access capital, data, or regulated workflows.

In sum, these layers are beginning to form the infrastructure of the agent economy:

  1. Discoverability: Agents must be able to find services in a machine-readable way, or they cannot identify opportunities.
  2. Identity: Agents must prove who they are and who authorized them, or they cannot participate.
  3. Reputation: Agents must build verifiable performance records, or they cannot earn ongoing economic trust.