Heteronomous AI. Why Governance Is Both a Technical and Human Problem.
I’ve written before about governance thresholds: the vocabulary that describes the relationship between a system and the laws that govern it.
Every AI system in production today is heteronomous. Without exception. A human built it. A human defined its purpose. A human authorized its deployment. A human is accountable for what it does. The governance comes from outside the system because the system cannot govern itself. That is the definition of heteronomy.
And this is part of the problem with AI governance.
The belief that these systems are somehow acting independently. When responsibility is offloaded to the machine, organizations lose track of who is ultimately accountable. Or at least, they want to let the machine take care of itself.
Hence my continued argument that the industry needs to course-correct its belief in self-governing AI agents (autonomy), because that belief is harming how we actually govern these human-imposed systems.
This matters because it answers a question that keeps being framed as a choice when it isn’t.
Is AI governance a technical problem or a human problem?
It is both.
By definition. Heteronomous AI requires humans to set the law and technical infrastructure to enforce it. You cannot have one without the other and call what remains governance.
A post I read this week had this line:
“AI governance is not a technical problem. It is knowing who is responsible for what the machine decides.”
I don’t disagree with that sentence. But it’s not that simple.
Because the framing, while useful for a board audience, contains a subtle sleight of hand that the technical community has let go unchallenged for too long. It presents a choice between two things that are not actually in competition.
AI governance is a technical problem. And it requires humans to be responsible. Both of those are true. Simultaneously. Without contradiction.
Where the Framing Works
The board-level simplification serves a purpose. Boards are not in the business of evaluating cryptographic audit architectures. They are in the business of asking accountability questions. Do we know which data our AI systems use? Is there one person accountable for each AI output? Can we trace any answer back to its source?
These are the right questions. The framing that produces them is correct. When something goes wrong with an AI system, the first question any regulator, auditor, or plaintiff will ask is not “describe your runtime evaluation architecture.” It is “who is responsible for this?”
If that question doesn’t have a specific human answer, you have a governance failure regardless of how sophisticated your technical infrastructure is. The one-page version is right about that.
Where the Framing Falls Short
Knowing who is responsible doesn’t make them capable of fulfilling that responsibility without the right infrastructure underneath them.
I can name a person as responsible for every AI decision in my organization. That person cannot review four thousand agent actions per day with sufficient attention to make those reviews meaningful. That person cannot detect that an agent’s behavioral patterns are shifting over six weeks in ways that compound quietly into something consequential. That person cannot cryptographically verify that the audit trail they’re being shown hasn’t been retroactively altered. That person cannot halt an action mid-execution once a threshold is crossed.
The human is responsible. The technical problem is what enables them to fulfill that responsibility.
This is not an unusual arrangement. We assign human responsibility for all kinds of systems that require technical infrastructure to govern. The pilot is responsible for the flight. The avionics tell them what they need to know and when. The engineer is responsible for the bridge. The sensors detect the load conditions that human perception cannot. The CFO is responsible for the financial statements. The audit system produces the evidence chain that makes those statements verifiable.
In every case, human responsibility and technical infrastructure are not alternatives. One without the other produces either an unaccountable system or an uninformed human pretending to be accountable.
The Four Questions Behind the Responsibility
The post asks the board to verify four things: which data the AI uses, who owns each decision, what the AI can and cannot do, and whether answers can be traced to their source.
These are governance questions. They are also technical specifications. Let me take each one.
Which data does the AI use? That is an identity and scope question. It requires that every agent have a defined, enforced scope of authorized data access. Not a policy document that describes the intended scope. A technical enforcement mechanism that prevents the agent from accessing data outside its scope and produces a verifiable record when it tries. The human is responsible for setting the scope. The technical system enforces it at runtime.
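To make that concrete, here is a minimal sketch of what runtime scope enforcement can look like. The agent ID, data-source names, and class shape are all hypothetical; the point is that the scope is checked at the moment of access, and every attempt, allowed or denied, leaves a record.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentScope:
    """The human-defined law: which data sources this agent may touch."""
    agent_id: str
    allowed_sources: frozenset
    access_log: list = field(default_factory=list)

    def access(self, source: str) -> None:
        """Enforce the scope at runtime and record every attempt."""
        allowed = source in self.allowed_sources
        self.access_log.append({
            "agent": self.agent_id,
            "source": source,
            "allowed": allowed,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        if not allowed:
            raise PermissionError(f"{self.agent_id} is not scoped for {source}")

# The human sets the scope; the system enforces it.
scope = AgentScope("claims-agent-01", frozenset({"claims_db", "policy_docs"}))
scope.access("claims_db")      # permitted, and recorded
# scope.access("payroll_db")   # would raise PermissionError, and still be recorded
```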
A name next to every decision. That is an audit trail question. Names without cryptographic verification can be changed. A real audit trail is hash-chained, tamper-evident, and attributable to a specific verified identity at every step. The human is responsible for the decision. The technical system produces the evidence that makes that responsibility verifiable rather than asserted.
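For illustration, here is a minimal sketch of what hash-chained and tamper-evident means, using only the Python standard library. A production trail would add digital signatures tied to verified identities; the actors and decisions below are invented.

```python
import hashlib
import json

def append_entry(chain: list, actor: str, decision: str) -> None:
    """Append a decision record whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "genesis"
    body = {"actor": actor, "decision": decision, "prev": prev}
    body["hash"] = hashlib.sha256(
        json.dumps({"actor": actor, "decision": decision, "prev": prev},
                   sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)

def verify(chain: list) -> bool:
    """Recompute every link; any retroactive edit breaks the chain."""
    prev = "genesis"
    for entry in chain:
        expected = hashlib.sha256(
            json.dumps({"actor": entry["actor"], "decision": entry["decision"],
                        "prev": entry["prev"]}, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

trail = []
append_entry(trail, actor="j.doe", decision="approved refund #4411")
append_entry(trail, actor="j.doe", decision="escalated case #4412")
assert verify(trail)
trail[0]["decision"] = "denied refund #4411"  # a retroactive alteration
assert not verify(trail)                      # the chain exposes it
```

A name in a spreadsheet survives editing. A name in a chain like this does not, which is the difference between asserted and verifiable responsibility.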
Rules for what the AI can and cannot do. That is a behavioral contract question. Rules require runtime enforcement, not just documentation. An agent with a documented scope but no technical enforcement of that scope is operating on the honor system. Humans are responsible for defining the rules. The technical system enforces them in the moment they matter.
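A sketch of the same idea for behavior: the contract is a set of predicates evaluated before the action runs, not a policy document consulted afterward. The rule names and action fields here are invented for illustration.

```python
# A behavioral contract: named rules checked before any action executes.
CONTRACT = {
    "refund_under_limit": lambda a: a.get("type") != "refund" or a.get("amount", 0) <= 500,
    "no_external_email":  lambda a: a.get("type") != "email" or a.get("to", "").endswith("@example.com"),
}

def execute(action: dict) -> str:
    """Runtime enforcement: the action is blocked the moment a rule fails."""
    for name, rule in CONTRACT.items():
        if not rule(action):
            return f"BLOCKED by {name}: {action}"
    return f"EXECUTED: {action}"

print(execute({"type": "refund", "amount": 120}))    # within the contract
print(execute({"type": "refund", "amount": 9000}))   # blocked in the moment it matters
```

Documentation describes the first case. Only enforcement stops the second.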
Tracing any answer back to its source. That is a provenance question that requires a chain of custody from action to outcome to the human authorization that permitted it. That chain has to be built at execution time, not reconstructed afterward. Reconstruction from incomplete records is not provenance. It is educated guessing.
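One way to picture a chain of custody built at execution time rather than reconstructed afterward (the field names and identities are illustrative):

```python
import uuid
from datetime import datetime, timezone

def record_step(provenance: list, action: str, authorized_by: str, parent: str = None) -> str:
    """Record each step as it executes, linked to its parent and its human authorization."""
    step_id = str(uuid.uuid4())
    provenance.append({
        "id": step_id,
        "parent": parent,               # what this step followed from
        "action": action,
        "authorized_by": authorized_by, # the human authorization behind it
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return step_id

def trace(provenance: list, step_id: str) -> list:
    """Walk any outcome back to the authorization that permitted it."""
    by_id = {p["id"]: p for p in provenance}
    chain = []
    while step_id is not None:
        step = by_id[step_id]
        chain.append(f'{step["action"]} (authorized by {step["authorized_by"]})')
        step_id = step["parent"]
    return chain

provenance = []
root = record_step(provenance, "agent deployed with scope v3", authorized_by="cto@firm")
fetch = record_step(provenance, "fetched claims_db record 8812", "ops-lead@firm", parent=root)
answer = record_step(provenance, "generated settlement recommendation", "ops-lead@firm", parent=fetch)
print(trace(provenance, answer))  # answer -> fetch -> deployment -> a named human
```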
The Bidirectional Problem Nobody Puts on One-Page Summaries
There is a dimension of the human responsibility argument that one-page board summaries consistently omit because it is uncomfortable.
The human you named as responsible will drift.
Not through negligence or bad intent. Through the ordinary mechanics of human attention under volume and time pressure. The person reviewing AI decisions on day one brings careful judgment to each case. By month six, with three times the volume and half the time, they are approving items faster than they can genuinely evaluate them. The approval rate is the same. The quality of the review is not.
This is bidirectional drift. Agent behavioral drift gets significant attention in technical governance circles. Human reviewer drift almost never does. An agent that processes thousands of decisions and gradually shifts its behavioral patterns is a recognized risk. A human reviewer who processes thousands of decisions and gradually shifts their review patterns is an invisible risk that no board-level governance framework accounts for.
The practical consequence is that the human you named as responsible may be fulfilling that responsibility in name, while the actual oversight function has degraded. The name is there. The governance is not.
Monitoring for reviewer drift, watching for the patterns that indicate a human oversight function is becoming ceremonial rather than operational, is a technical capability. It requires tracking approval rates over time, review duration trends, consistency across similar case types, and patterns that diverge from established baselines. That is the same infrastructure you need to monitor agent drift. Applied to the humans doing the reviewing.
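A minimal sketch of what that monitoring can look like, applied to the reviewer rather than the agent. The thresholds and numbers are illustrative, not prescriptive:

```python
from statistics import mean

def reviewer_drift_flags(durations_sec: list, approvals: list,
                         baseline_duration: float, baseline_approval_rate: float) -> list:
    """Flag patterns suggesting the oversight function is becoming ceremonial."""
    flags = []
    if mean(durations_sec) < 0.5 * baseline_duration:
        flags.append("review time collapsed versus baseline")
    rate = sum(approvals) / len(approvals)
    if abs(rate - baseline_approval_rate) > 0.15:
        flags.append(f"approval rate drifted to {rate:.0%}")
    return flags

# Month one baseline: ~90 seconds per review, 95% approval.
# Month six: the approval rate looks the same; the review time does not.
flags = reviewer_drift_flags(
    durations_sec=[11, 9, 14, 10, 12],
    approvals=[True] * 19 + [False],
    baseline_duration=90.0,
    baseline_approval_rate=0.95,
)
print(flags)  # ['review time collapsed versus baseline']
```

The same functions, pointed at agent decisions instead of reviewer approvals, monitor agent drift. That is the sense in which it is one infrastructure.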
Nomotic monitors both. Because both drift. And because a governance system that only watches the agents while trusting the humans to remain consistently engaged is half a governance system.
The Answer Is Both
We started off with this statement: “AI governance is not a technical problem. It is knowing who is responsible for what the machine decides.”
Let me offer an amendment.
AI governance is a technical problem that requires humans to be responsible for its outcomes. The technical problem is what makes human responsibility possible to fulfill. Human responsibility is what makes the technical infrastructure accountable to something beyond its own configuration.
Nomotic solves the technical half: the runtime enforcement, the verifiable audit trails, the drift monitoring on both sides. Your organization still needs to name the humans responsible for ensuring the system works, setting the policies, reviewing escalations, and answering when something goes wrong.
Neither half of that is optional. Neither half substitutes for the other.
The board should ask both questions. Who is responsible? And do they have the infrastructure to fulfill that responsibility in a way that produces verifiable evidence when you need it?
No matter how you look at it, the system is not responsible for itself.
If you find this content valuable, please share it with your network.
Follow me for daily insights.
Book me to speak at your next event.
Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infailible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in 2026.