AI Governance. This Week Confirmed Everything.
I started building Nomotic before most of the industry had decided what to call this problem.
Not because I had a crystal ball. Because I had spent enough time at the intersection of enterprise technology and AI to know that the question the industry was obsessing over, “what can AI do for us?”, was only part of the question.
Another part was the ridiculous belief that AI was already autonomous. That belief sent me down a path to disprove it, and to establish that autonomy IS governance, not automation.
The belief in autonomy missed the fundamental component of the system. Governance was the part of the equation missing from every one of those discussions.
It posed the next question. What “should” AI do?
This is why I built Nomotic.
By January of 2026, the market was flooded with people claiming to have an original, deterministic, execution-boundary governance system for AI.
This week, two of the most significant players in the AI agent space announced major products. Anthropic launched Claude Managed Agents. LangChain launched Deep Agents Deploy. Both are genuinely impressive pieces of infrastructure. Both validate something I have been saying for the better part of a year.
But neither one touches governance.
What This Week Actually Means
I want to be fair to both announcements. Anthropic’s Managed Agents is a great addition to their AI library. The decision to separate session state from harness execution and build a meta-harness to accommodate models that don’t yet exist is serious systems thinking. I respect it.
LangChain’s memory lock-in argument is sharp and correct. When you bundle agent memory behind a closed proprietary API, that memory belongs to whoever runs the API. That is a real risk for any organization building on agents that learn over time. Their open-source, model-agnostic positioning is a legitimate counterpoint to the Anthropic approach.
But here is what both announcements have in common. They are both solving the execution problem. How do you deploy an agent reliably, at scale, with good infrastructure underneath it?
Nomotic does this too.
Nomotic lets you package an agent, transport it to another server, host it, manage its identity, deploy it with full lifecycle tooling, run analytics on it, and govern it with a proprietary governance engine that is 99.9% accurate.
Both platforms provide what they call human-in-the-loop (we call that heteronomy). Both describe it as a set of guardrails that define what an agent can or cannot do, which is primarily security: an allowlist and a blocklist. That is access control with a friendlier name. It is not governance.
The Gap Nobody Announced This Week
When I survey this week’s announcements, what was not announced is more interesting than what was.
Nobody announced agent identity. The ability to issue a cryptographic birth certificate to every agent: an Ed25519-signed document that binds the agent to a human owner, a defined behavioral archetype, a governance zone, and a specific governance configuration, and that travels with the agent across its entire operational lifetime. Nobody announced that, and ours has a patent pending.
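To make the shape of that concrete, here is a minimal sketch of the general pattern in Python, using the widely available cryptography library. The field names and values are hypothetical; Nomotic’s actual certificate schema is not public.

```python
# Minimal sketch of an Ed25519-signed agent identity document.
# Field names and values are hypothetical illustrations, not Nomotic's schema.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()  # held by the issuing authority

identity = {
    "agent_id": "agent-7f3a",
    "owner": "human:owner@example.com",
    "archetype": "research-assistant",
    "governance_zone": "eu-finance",
    "governance_config": "policy-v12",
    "issued_at": "2026-01-15T09:00:00Z",
}

# Canonical serialization so the signature is reproducible byte for byte.
payload = json.dumps(identity, sort_keys=True, separators=(",", ":")).encode()
signature = issuer_key.sign(payload)

# The signed document travels with the agent; anyone holding the issuer's
# public key can verify it offline, without contacting the issuer.
issuer_key.public_key().verify(signature, payload)  # raises InvalidSignature if tampered
```

The point of the pattern is that verification requires nothing but the issuer’s public key, so the identity remains checkable wherever the agent goes.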
Nobody announced behavioral drift detection. The ability to monitor not just what an agent does on a given call, but whether the pattern of what it does is shifting over time in ways that indicate the agent itself is changing. Not an alert when a rule fires. A continuous assessment of behavioral health across every agent in the fleet. Nobody announced that, and our bidirectional drift detection has a patent pending.
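As an illustration of the underlying idea, and nothing more, here is a single-signal sketch: compare an agent’s recent mix of actions against its established baseline using a symmetric divergence, so a shift is flagged regardless of direction. The action names and the threshold are hypothetical, and the production mechanism is multi-signal and proprietary.

```python
# Single-signal sketch of behavioral drift detection over action distributions.
# Action names and the threshold are hypothetical, for illustration only.
from collections import Counter
from math import log

def distribution(actions, vocab):
    counts = Counter(actions)
    total = len(actions)
    # Add-one smoothing avoids log(0) for actions unseen in one window.
    return {a: (counts[a] + 1) / (total + len(vocab)) for a in vocab}

def drift_score(baseline_actions, recent_actions):
    vocab = set(baseline_actions) | set(recent_actions)
    p = distribution(baseline_actions, vocab)
    q = distribution(recent_actions, vocab)
    # Symmetric KL (Jeffreys) divergence: flags a shift in either direction,
    # whichever way the behavior is moving.
    return sum((p[a] - q[a]) * log(p[a] / q[a]) for a in vocab)

baseline = ["read_doc"] * 80 + ["send_email"] * 15 + ["query_db"] * 5
recent   = ["read_doc"] * 40 + ["send_email"] * 10 + ["query_db"] * 50

if drift_score(baseline, recent) > 0.5:  # threshold would be tuned per archetype
    print("behavioral drift detected: escalate for review")
```

Notice what triggers here: no individual query_db call breaks a rule, but the agent’s overall pattern has moved, and that is the signal a per-call guardrail never sees.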
Nobody announced a tamper-evident audit trail in any forensic sense. Logs are not audit trails. An audit trail is hash-chained, cryptographically verifiable, and produces evidence that would satisfy a regulator or a legal proceeding. The question “which agent, under which governance configuration, owned by which human, took this action, and can you prove the record hasn’t been altered?” was not answered by either announcement.
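The mechanics are worth seeing once. Below is a minimal sketch of a hash chain, assuming SHA-256 over canonically serialized records; every entry commits to the hash of the entry before it, so altering any past record breaks every hash that follows. The record fields are hypothetical.

```python
# Minimal sketch of a hash-chained, tamper-evident audit trail.
# Record fields are hypothetical; the chaining mechanism is the point.
import hashlib, json

def append_record(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev_hash, **record}
    payload = json.dumps(body, sort_keys=True).encode()
    body["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(body)

def verify(chain):
    for i, entry in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        ok = (entry["prev_hash"] == expected_prev
              and entry["hash"] == hashlib.sha256(payload).hexdigest())
        if not ok:
            return i  # index of the first tampered entry
    return None      # chain intact

trail = []
append_record(trail, {"agent": "agent-7f3a", "owner": "human:owner",
                      "config": "policy-v12", "action": "send_email"})
append_record(trail, {"agent": "agent-7f3a", "owner": "human:owner",
                      "config": "policy-v12", "action": "query_db"})

trail[0]["action"] = "read_doc"   # tamper with history
assert verify(trail) == 0        # verification pinpoints the break
```

Anyone holding the chain can run that verification without trusting whoever produced the records, which is exactly what a log file cannot offer.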
Nobody announced runtime governance in the behavioral sense I have been describing: 20-dimensional evaluation of every action before it executes, incorporating trust history, behavioral trajectory, contextual factors, and organizational policy into a verdict that determines not just whether an action is permitted, but whether it should happen, backed by a proprietary AI model trained to make judgment-based decisions.
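The full 20 dimensions are proprietary, so the sketch below uses three hypothetical inputs and invented weights purely to show the shape of the evaluation: permission is one signal among several, and a permitted action can still be the wrong action right now.

```python
# Sketch of a pre-execution governance verdict. The three inputs, the risk
# formula, and the thresholds are hypothetical stand-ins for a 20-dimensional
# evaluation; they illustrate the shape, not the production logic.
from dataclasses import dataclass

@dataclass
class ActionContext:
    permitted: bool      # from the policy allowlist
    trust_score: float   # 0..1, from the agent's behavioral history
    drift_score: float   # recent behavioral drift (lower is healthier)

def evaluate(ctx: ActionContext) -> str:
    if not ctx.permitted:
        return "deny"                 # hard policy boundary
    # A permitted action can still be the wrong action for this agent now.
    risk = (1 - ctx.trust_score) + ctx.drift_score
    if risk > 1.0:
        return "deny"
    if risk > 0.5:
        return "escalate"             # route to a human for judgment
    return "allow"

print(evaluate(ActionContext(permitted=True, trust_score=0.9, drift_score=0.1)))  # allow
print(evaluate(ActionContext(permitted=True, trust_score=0.4, drift_score=0.7)))  # deny
```

Both example actions pass the allowlist; only the first passes governance. That gap between the two outputs is the gap this article is about.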
Those are the things Nomotic was built to provide. They were not features I added because they seemed interesting. They were the answers to the questions I kept finding that nobody else was asking.
What I’ve Learned About Building in Front of the Market
Building something before the market has decided it needs it is a particular kind of experience. You spend a lot of time explaining why the problem exists before you get to explain how you solve it. You watch people build things that address adjacent problems and call them solutions to your problem. You file patents on mechanisms that won’t be recognized as important for another year.
None of that was because I was certain the market would catch up. It was because the problems were real, regardless of whether the market had named them yet.
This week, the market named them. Not directly. By building the execution layer and leaving the governance layer empty, Anthropic and LangChain have made visible exactly the gap that Nomotic was built to fill. Every organization deploying on their platforms now has a governance question that neither platform answers. That question leads to Nomotic.
The Memory Argument and the Governance Argument Are the Same Argument
LangChain’s most powerful point against Anthropic is about memory ownership. When your agent learns from its interactions and that learning accumulates behind a closed API, you don’t own the asset you’ve been building. The memory belongs to whoever runs the infrastructure.
I want to extend that argument to governance.
Your audit trail is an asset. Your trust scores, the behavioral history that records how each agent has performed over time, are an asset. Your governance evaluations, your behavioral fingerprints, your incident records, all of it is organizational knowledge that describes how your AI systems have actually behaved, as opposed to how you intended them to behave.
If that governance data lives in a platform you don’t control, you don’t own it. You can’t verify it independently. You can’t present it as evidence to a regulator without relying on the platform’s word for its integrity. You can’t take it with you if you change infrastructure.
Nomotic’s hash-chained audit trail is governance data you own. Cryptographically verifiable without depending on any third party. Yours to export, audit, and present as evidence regardless of what happens to the platforms your agents run on.
The open vs. closed argument that LangChain is making about execution infrastructure applies just as directly to governance infrastructure. Your governance record should be as portable as your memory.
Where I Think This Goes
I have written about ungoverned systems becoming human-governed, about the simulated governance we see in today’s sophisticated AI systems, and about the genuine autonomy that hasn’t arrived yet.
What Anthropic and LangChain built this week is excellent infrastructure for heteronomous systems: systems governed by others, in their case by very well-engineered execution infrastructure that a human authorized and deployed.
The governance layer for those systems, the infrastructure that makes them accountable, auditable, behaviorally bounded, and continuously evaluated against the standards the organization actually intends, is what Nomotic provides on top of both.
I have been saying for months that we don’t need one hundred AI control planes. We need agent-based standards and a governance layer that works with whatever execution infrastructure organizations choose. This week, the execution infrastructure came into view.
The market just caught up to the problem. Nomotic has been building the solution.
Nomotic is the world’s first native agent management platform. Host, deploy, and govern your agents on one platform.
If you find this content valuable, please share it with your network.
Follow me for daily insights.
Book me to speak at your next event.
Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infailible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in 2026.