The Race to 100 AI Control Planes
I’ve seen crowded markets before.
The website development gold rush. Then came the app era, where every problem on earth apparently needed a dedicated icon on your home screen. I watched both markets balloon, consolidate, and shake out the vendors who sold the idea of a solution rather than the solution itself.
This morning, I opened LinkedIn and counted variations of the same announcement. Brand new. Never before seen. One of a kind. Patent pending. Each one a unique approach to AI governance.
There are easily fifty of them today. By the end of the week, there will be more.
The Honest Part
Nomotic was early to this space by most reasonable measures. I hold provisional patents on cryptographically signed agent identity, hash-chained audit trails, and a handful of other mechanisms that are now appearing in competitor announcements under fresh branding. Being first matters for IP. It matters far less for market penetration, which is a separate race with different rules.
But I’m not writing this to talk about timing or patents. I’m writing this because I’ve spent a year researching, and I think most of what’s being shipped right now doesn’t work the way the people shipping it believe it does.
That’s not a comfortable thing to say publicly. But it is true.
And the reason it’s true is the same reason I’ve been writing about for months. AI doesn’t work the way most people believe it does.
None of this is to discount the work being done. The opportunity is real, the market is real, and somebody in this space has a billion-dollar idea. That is worth pursuing.
The Hard Problem Nobody Is Advertising
Here is what most solo vibe coders, and most frontier AI companies in this space, won’t tell you.
After a year of serious research into AI governance systems. After building, benchmarking, and iterating. The hardest version of this problem remains unsolved. By me. By anyone.
And no, you have not solved it either.
Wrapping a probabilistic system with a deterministic decision plane is not AI governance. It works some of the time. It produces auditable verdicts. It enforces defined boundaries. For certain use cases, it is exactly what is needed. But as a general solution for governing agentic AI, it has a fundamental problem: it negates the system’s probabilistic nature. And right now, the probabilistic nature of LLMs is the entire reason organizations are deploying them. You don’t reach for an LLM because you want deterministic outputs. You reach for it because you want something that can handle ambiguity, navigate novel contexts, and generate responses that weren’t explicitly programmed.
Deterministic governance over a probabilistic system is architecturally in tension with its own purpose.
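To make the tension concrete, here is a minimal sketch of a deterministic decision plane wrapped around a probabilistic system. Every name here is hypothetical, invented for illustration; the point is that a rule set can only pass judgment on actions it anticipated.

```python
# Illustrative sketch of a deterministic wrapper around an LLM-driven
# agent. BLOCKED_ACTIONS and deterministic_gate are hypothetical names.

BLOCKED_ACTIONS = {"transfer_funds", "delete_records"}

def deterministic_gate(proposed_action: str) -> bool:
    """Allow or deny by exact rule match. Fully auditable, but blind
    to anything the rule set never anticipated."""
    if proposed_action in BLOCKED_ACTIONS:
        return False
    # Novel actions fall through to a default verdict. The wrapper
    # cannot reason about the ambiguous, unprogrammed behavior that
    # was the reason for deploying an LLM in the first place.
    return True

print(deterministic_gate("transfer_funds"))   # False
print(deterministic_gate("summarize_notes"))  # True
```

The wrapper works, and it audits cleanly. What it cannot do is evaluate the open-ended outputs the probabilistic system exists to produce.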
Probabilistic governance over a probabilistic system is the honest answer. A system that evaluates likely compliance rather than rule-matching. That operates in the same statistical register as the system it governs. In my research, probabilistic governance reaches roughly 79% accuracy. For some applications, 79% governance is acceptable. For a healthcare system managing medication decisions or a bank processing financial transactions, it is not. Those organizations are not comfortable with 98% accuracy, let alone 79%.
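The shape of probabilistic governance can be sketched as a compliance-likelihood score compared against a threshold. The scoring function below is a toy keyword heuristic standing in for a real statistical evaluator, and all names are assumptions for illustration, not the research system described above.

```python
# Hypothetical sketch: score likely compliance instead of matching
# rules. compliance_score is a toy stand-in for a learned evaluator.

def compliance_score(action: str) -> float:
    """Return an estimated probability that the action is compliant."""
    risk_terms = {"medication": 0.4, "payment": 0.3, "delete": 0.3}
    score = 0.95
    for term, penalty in risk_terms.items():
        if term in action:
            score -= penalty  # riskier phrasing lowers the estimate
    return max(score, 0.0)

def probabilistic_verdict(action: str, threshold: float = 0.8) -> bool:
    """Approve only when estimated compliance clears the threshold."""
    return compliance_score(action) >= threshold

print(probabilistic_verdict("summarize patient notes"))  # True
print(probabilistic_verdict("order medication refill"))  # False
```

The verdict is a statistical estimate, not a proof, which is exactly why a fraction of its judgments will be wrong.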
Hybrid governance, which combines deterministic rules with probabilistic evaluation, is where most serious implementations land. It also introduces latency. Every decision that escalates from the deterministic tier to the probabilistic tier adds time. In agentic systems that operate at machine speed, that latency compounds.
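The latency cost of escalation can be made visible in a small sketch. The tier names are hypothetical, and the probabilistic tier is simulated with a fixed delay; a real model call would typically be slower still.

```python
# Hedged sketch of hybrid governance: a deterministic fast path with
# escalation to a (simulated) probabilistic tier. All names invented.
import time

HARD_DENY = {"delete_all_records"}
HARD_ALLOW = {"read_public_docs"}

def probabilistic_tier(action: str) -> bool:
    time.sleep(0.05)  # stand-in for a model call's latency
    return "payment" not in action

def hybrid_verdict(action: str) -> tuple[bool, float]:
    """Return (verdict, seconds elapsed) for one governance decision."""
    start = time.perf_counter()
    if action in HARD_DENY:
        verdict = False          # deterministic tier: microseconds
    elif action in HARD_ALLOW:
        verdict = True
    else:
        verdict = probabilistic_tier(action)  # escalation adds latency
    return verdict, time.perf_counter() - start

v1, t1 = hybrid_verdict("read_public_docs")    # fast path
v2, t2 = hybrid_verdict("draft_payment_memo")  # escalated, slower
```

In a chain of agent-to-agent calls, every escalated decision adds its delay to the chain, which is how the latency compounds at machine speed.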
There is no clean solution here. Probabilistic governance has gaps. Deterministic governance negates the goal. Hybrid governance creates latency. Anyone claiming they have eliminated all three problems logged into Claude Code and asked it to build them a control plane.
Many organizations would be better served by simpler, more controlled automation that is genuinely governable. Instead, they are reaching for agentic frameworks and scrambling to build governance atop systems designed for flexibility rather than constraint.
The urgency is real. Unfortunately, the products being built to meet that demand are, in many cases, solutions to a portion of the problem.
What the Industry Is Actually Missing
The conversation about AI governance is too focused on control planes and not focused enough on standards.
We do not need one hundred different implementations of agent governance. We need agreement on how agents identify themselves, communicate their capabilities, and transfer context between systems. We need standards that make governance a native property of agentic systems rather than a layer applied from outside after deployment.
Here are a few specific things I’ve been working on that point toward what this looks like.
The Agent Transfer Protocol (agtp://) is a proposal for an application-layer protocol where governance is a native component, not an afterthought. When agents communicate through ATP, governance semantics travel with the request. Not as metadata that a governance layer might inspect, but as a first-class element of how agents interact.
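A rough sketch of what an ATP-style request might carry. agtp:// is a proposal, not a published specification, so every field name below is an assumption made for illustration.

```python
# Hypothetical shape of an ATP request in which governance semantics
# travel with the request itself. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class GovernanceContext:
    agent_id: str              # cryptographically verifiable identity
    permitted_scopes: list[str]  # capabilities the agent may exercise
    audit_chain_head: str      # hash linking to the prior audit entry

@dataclass
class ATPRequest:
    uri: str                       # e.g. "agtp://ledger.example/reconcile"
    intent: str                    # declared intent, not a bare verb
    governance: GovernanceContext  # first-class, not optional metadata

req = ATPRequest(
    uri="agtp://ledger.example/reconcile",
    intent="reconcile-monthly-invoices",
    governance=GovernanceContext(
        agent_id="agent:invoice-reconciler",
        permitted_scopes=["read:invoices", "write:ledger-drafts"],
        audit_chain_head="sha256:placeholder",
    ),
)
```

Because the governance context is a required field of the request type, a receiving system cannot process the request without also receiving the semantics it needs to govern it.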
The .agent packaging standard defines a format for declaring what an agent is, what it does, its capabilities, and the governance context in which it operates. Without a standard for agent identity and capability declaration, every governance system must infer what it governs from behavior. Inference is not governance. The declaration is.
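A hypothetical .agent manifest, expressed here as a Python dict for illustration. The packaging format itself is unpublished, so the keys and values are assumptions about what such a declaration could contain.

```python
# Sketch of a .agent declaration: identity, capabilities, and the
# governance context the agent operates in. Keys are hypothetical.
manifest = {
    "name": "invoice-reconciler",
    "version": "0.1.0",
    "capabilities": ["read:invoices", "write:ledger-drafts"],
    "governance": {
        "identity_key": "ed25519:placeholder",  # not a real key
        "audit": "hash-chained",
        "escalation_policy": "human-review",
    },
}

def declared_capabilities(m: dict) -> list[str]:
    """A governance system evaluates what is declared, not inferred."""
    return m.get("capabilities", [])
```

With a declaration like this, a governance system checks behavior against a stated contract instead of reverse-engineering one from observed behavior.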
Intent-based API verbs are a work in progress on how agents communicate actions through systems in ways that carry governance-relevant semantics. The difference between an agent that calls a generic endpoint and one that declares its intent at the protocol level is the difference between a governance system that has to guess and one that has something to evaluate.
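The contrast can be sketched in a few lines. The verb and field names are hypothetical, chosen to illustrate the difference between a call a governor must guess about and one it can actually evaluate.

```python
# A generic endpoint call carries no governance-relevant semantics:
generic = {"method": "POST", "path": "/api/v1/records/123"}

# An intent-declared call carries them at the protocol level
# (verb names here are invented for illustration):
declared = {
    "verb": "RECONCILE",
    "target": "/api/v1/records/123",
    "justification": "monthly close",
}

def can_evaluate(request: dict) -> bool:
    """A governance layer can only evaluate what a request declares."""
    return "verb" in request and "justification" in request

print(can_evaluate(generic))   # False: the governor has to guess
print(can_evaluate(declared))  # True: there is something to evaluate
```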
These are not products. They are standards work. And standards work is unglamorous. It doesn’t generate LinkedIn announcements. It doesn’t raise funding rounds. It takes years and produces documents that most people never read. But the history of every mature technology category suggests that governance problems at scale get solved by standards, not by competing implementations of the same pattern.
We don’t need a hundred control planes. We need agents that carry their governance context with them, communicate in semantics that governance systems can natively interpret, and operate within a shared framework of identity and accountability that any compliant system can verify.
That is the race worth running.
If you find this content valuable, please share it with your network.
Follow me for daily insights.
Book me to speak at your next event.
Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infallible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in 2026.