Agent Traffic Is Exploding. It's Time to Get It Off HTTP.
Cloudflare CEO Matthew Prince just predicted that AI bot traffic will exceed human traffic on the internet by 2027. His company serves approximately 20% of all websites, and among sites that sit behind a reverse proxy, roughly 80% route their traffic through Cloudflare. He is working from data, and the numbers are unambiguous.
The math is straightforward. A human shopping for a digital camera might visit five websites. An AI agent completing the same task could visit fifty, pulling product data, comparing reviews, cross-referencing pricing, and summarizing findings before returning a single answer. That is a 10x traffic multiplier from one query. Before generative AI, bots accounted for about 20% of total web traffic. Google’s crawler dominated that share, alongside a handful of legitimate indexers and the usual scammer noise.
That ceiling is about to disappear.
Companies building agent systems are already running into this in production. A retail client's product research agent queried competitor sites so frequently that it triggered their rate limiting. The fix was manual request throttling, a workaround that agents at scale will quickly outgrow.
The underlying problem is infrastructure. The web was designed for humans, and agents are something else entirely.
HTTP Is the Wrong Protocol for Agent Traffic
When an AI agent makes a request today, it looks exactly like a human request at the transport layer. It travels over HTTP. It carries the same structural signature as a browser visit. Every router, proxy, gateway, and CDN it passes through treats it the same as if someone clicked a link.
HTTP was designed for stateless, resource-oriented, human-initiated traffic. Agent traffic is intent-driven, stateful across sequences of related requests, and operating under declared authority. The intent gets buried in the request body, invisible to every infrastructure component the traffic passes through.
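The observability gap is easy to see in miniature. In this hypothetical sketch (the request shapes and field names are illustrative assumptions, not any real wire format), a human search and an agent search are indistinguishable by everything a router, proxy, or CDN can cheaply inspect; the agent's intent exists only inside the opaque body.

```python
# Both requests look identical at the transport layer.
human_request = {
    "method": "POST",
    "path": "/search",
    "headers": {"Content-Type": "application/json"},
    "body": '{"query": "digital camera"}',
}
agent_request = {
    "method": "POST",
    "path": "/search",
    "headers": {"Content-Type": "application/json"},
    # Intent and delegation live only in the body, invisible to infrastructure.
    "body": '{"query": "digital camera", "intent": "compare-prices", '
            '"on_behalf_of": "user-123"}',
}

# Everything routing infrastructure keys on is identical across the two:
routable = lambda r: (r["method"], r["path"], r["headers"])
print(routable(human_request) == routable(agent_request))  # True
```

This is why distinguishing agent traffic from inside HTTP requires deep inspection of every body, rather than a cheap protocol-level signal.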
The HTTP method registry is frozen for practical purposes. New methods require full IETF consensus and must be backward-compatible with every HTTP client, server, proxy, and middleware component in existence. More importantly, adding methods would leave the observability problem unsolved. Infrastructure components route and filter HTTP traffic based on methods and headers that are identical across agent and human requests. Distinguishing agent traffic requires a protocol-level signal, and HTTP carries none.
The result is that as agent traffic scales toward a majority share of all web activity, the infrastructure carrying it is operating in the dark. Analytics break. Rate limiting misaligns. Governance becomes a manual, application-layer afterthought rather than a structural property of the network.
Separation Is the Answer
The core argument for the Agent Transfer Protocol (AGTP) is not about who is sending the traffic. HTTP already carries enormous volumes of non-human traffic and handles it without issue. The argument is about what the traffic needs to carry and what the infrastructure needs to do with it.
HTTP was built to serve content. AGTP was built to govern action.
Those are fundamentally different infrastructure problems. An agent booking a flight, executing a transaction, or delegating authority to a peer system is not fetching a resource. It is acting, under authority, on behalf of a principal, with consequences that need to be attributable. HTTP can carry that traffic. It cannot make that traffic accountable. Accountability requires identity, and identity requires a protocol designed around it from the start.
AGTP runs alongside HTTP. The two coexist. The question was never whether HTTP could move agent requests from one place to another. It can. The question is whether the infrastructure can answer, with certainty, who authorized the action, what they were permitted to do, and what actually happened. On HTTP, that answer requires manual instrumentation layered on after the fact. On AGTP, it is built into every request.
Getting agent traffic onto a protocol designed for governance rather than content delivery is a foundational infrastructure decision. The organizations that make it before the traffic tipping point will operate with that certainty. The ones that delay will be retrofitting accountability into a web that has already moved on.
And Then There Is the Identity Problem
Separation solves the infrastructure problem. AGTP solves something else, too, something that sits underneath the volume question and makes the stakes significantly higher.
Right now, when an AI agent takes an action on the internet, two questions go unanswered at the infrastructure level: who is this agent, and who is accountable for what it does?
Agents operate pseudonymously inside HTTP. Identity, when it exists at all, is asserted at the application layer and fails to travel reliably across systems, handoffs, and delegation chains. There is no persistent, verifiable, cryptographically bound identity that follows an agent from request to request.
AGTP changes this by defining a complete Agent Identity structure and issuing a Birth Certificate to every agent.
An Agent Birth Certificate is a cryptographically signed identity document issued to an agent upon registration. It is the genesis record of that agent’s existence on the network. The canonical Agent-ID the agent carries in every subsequent request is derived from it. It works the way a social security number works for a person: issued once, permanently bound, never reissued. The agent carries its identity across every interaction, operates only within the scope for which it was registered, and can be verified at any point in the chain.
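A minimal sketch of issuance might look like the following. The field names, the derivation of the Agent-ID, and the signing scheme are all assumptions for illustration; a real registry would use an asymmetric signature such as Ed25519, but HMAC stands in here so the sketch runs on the Python standard library alone.

```python
import hashlib
import hmac
import json

# Stand-in for the issuing registry's private signing key (assumption).
ISSUER_KEY = b"registry-signing-key"

def issue_birth_certificate(principal_id: str, scope: list[str]) -> dict:
    """Issue a signed genesis record and derive the canonical Agent-ID from it."""
    body = {
        "principal_id": principal_id,
        "authority_scope": scope,
        "issued_at": "2025-01-01T00:00:00Z",  # fixed timestamp for reproducibility
    }
    payload = json.dumps(body, sort_keys=True).encode()
    # Sign the canonical serialization of the certificate body.
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    # The Agent-ID is derived once from the signed record: issued once,
    # permanently bound, never reissued.
    agent_id = "agt-" + hashlib.sha256(payload + signature.encode()).hexdigest()[:16]
    return {**body, "signature": signature, "agent_id": agent_id}

cert = issue_birth_certificate("org:acme-retail", ["catalog:read", "pricing:read"])
print(cert["agent_id"])  # deterministic: same genesis record, same Agent-ID
```

The design point the sketch captures is that the identity is derived from the signed record rather than assigned independently of it, so the ID cannot be separated from the certificate that vouches for it.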
Every AGTP request carries an Agent-ID, a Principal-ID identifying the human or organization accountable for the agent’s actions, and an Authority-Scope declaration defining what the agent is permitted to do. These are mandatory headers present in every request. When an agent attempts an action outside its declared scope, the infrastructure returns a governance signal and logs it.
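A gateway enforcing those headers could be sketched as below. The header names follow the article; the scope encoding, return shapes, and action names are hypothetical, not a specification.

```python
def check_scope(headers: dict, requested_action: str) -> dict:
    """Gateway-side check: allow in-scope actions, emit a governance signal otherwise."""
    required = ["Agent-ID", "Principal-ID", "Authority-Scope"]
    missing = [h for h in required if h not in headers]
    if missing:
        # The three identity headers are mandatory on every request.
        return {"status": "rejected", "reason": f"missing headers: {missing}"}
    allowed = headers["Authority-Scope"].split()
    if requested_action not in allowed:
        # Out-of-scope action: return a governance signal, attributable
        # to both the agent and its accountable principal.
        return {
            "status": "governance-violation",
            "agent": headers["Agent-ID"],
            "principal": headers["Principal-ID"],
            "attempted": requested_action,
        }
    return {"status": "allowed", "agent": headers["Agent-ID"]}

req = {
    "Agent-ID": "agt-1f3a",
    "Principal-ID": "org:acme-retail",
    "Authority-Scope": "catalog:read pricing:read",
}
print(check_scope(req, "pricing:read"))    # allowed
print(check_scope(req, "checkout:write"))  # governance-violation
```

Because the check runs on headers rather than request bodies, any intermediary on the path can enforce it without parsing application payloads.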
This is what accountability looks like at the infrastructure level: a verifiable record, built into the protocol and present in every interaction.
Why This Cannot Wait
Prince’s prediction lands in 2027. That is the next product cycle for most organizations that build or deploy agent systems.
The urgency extends well beyond traffic volume. Unidentified, unaccountable agent traffic is a security problem, a governance problem, and an organizational liability problem, all compounding simultaneously.
An anonymous agent is an unauditable one. Consequential actions, scope violations, unauthorized transactions, and data exposures all require a traceable identity to reconstruct. With Agent Birth Certificates, that record exists at the infrastructure level from the first request forward.
An agent with a verifiable, cryptographically bound identity is also a closed attack surface. Agent impersonation, authority laundering, and delegation chain fraud become structurally difficult when every agent carries an identity that cannot be shed or forged. AGTP makes legitimate agents provably legitimate.
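The anti-impersonation property reduces to a signature check, sketched below under the same illustrative assumptions as before: HMAC with a shared registry key stands in for a real asymmetric scheme, and the certificate fields are hypothetical.

```python
import hashlib
import hmac
import json

# Stand-in for the registry's key; real AGTP would verify against a public key.
REGISTRY_KEY = b"registry-signing-key"

def sign(payload: bytes) -> str:
    return hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()

def verify_agent(cert: dict) -> bool:
    """Reject any certificate whose signature no longer matches its contents."""
    body = {k: v for k, v in cert.items() if k != "signature"}
    expected = sign(json.dumps(body, sort_keys=True).encode())
    # Constant-time comparison avoids leaking signature bytes via timing.
    return hmac.compare_digest(expected, cert["signature"])

genuine = {"agent_id": "agt-1f3a", "principal_id": "org:acme-retail"}
genuine["signature"] = sign(json.dumps(genuine, sort_keys=True).encode())

# An impersonator swaps in a new principal but cannot re-sign the record.
forged = {**genuine, "principal_id": "org:attacker"}

print(verify_agent(genuine), verify_agent(forged))  # True False
```

Tampering with any field invalidates the signature, which is what makes identity laundering structurally difficult rather than merely against the rules.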
And governance built on identified, attributable agents is real governance. Authority scope can be enforced because it can be verified. Actions can be audited because they can be attributed. Accountability becomes possible because outcomes have a traceable origin.
The companies that build agent identity infrastructure before the tipping point will operate in a web where their agents are known, their authority is declared, their actions are attributable, and scope violations surface at the infrastructure layer before consequences compound.
If you find this content valuable, please share it with your network.
Follow me for daily insights.
Book me to speak at your next event.
Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infallible and Customer Transformation, and has been recognized as one of the Top 40 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in 2026.