Governance Thresholds for Heteronomy, Simonomy, and Autonomy
Yesterday, I added another article to my long list demonstrating why AI is not autonomous. Today, I want to go deeper, because the problem isn’t just whether technology can somehow justify its autonomy. It’s that the words themselves have definitions that don’t change when used in a technological setting. There is not a separate “philosophical” definition of autonomy and an “engineering” definition of autonomy, despite an argument I often hear to the contrary. And without precise vocabulary, we end up having arguments about governance that are really just arguments about definitions.
The Etymology
Words mean things. Especially the ones with Greek and Latin roots that have been in use for centuries before marketing departments got to them.
Let’s build the vocabulary from the ground up. Every term in this framework shares the same root, which is the point. The differences lie in the prefixes. The foundation is always the same.
Nomos (νόμος). The shared root. Ancient Greek for law, custom, or convention. Derived from the verb nemein, meaning to distribute or assign. Nomos is not just any rule. It carries the sense of an ordering principle, something that structures how things are arranged and governed. It is the root of words like ‘economics,’ ‘astronomy,’ and ‘taxonomy.’ It is the root we are working with here.
Every word in this framework is a relationship between an actor and nomos. The prefix tells you who governs, and how.
Unonomy. The prefix un is an Old English negation, cognate with the Latin in-, meaning ‘not’ or ‘without’. Unonomy is the state of existing without nomos. Not governed. Not ordered. Not subject to law. The absence of governance in any form. Un plus nomos: without law.
Heteronomy. The prefix hetero comes from the Greek heteros (ἕτερος), meaning other or different. Heteronomy is being governed by laws that come from elsewhere, from outside the self. The law exists. It was written by others. You operate within it. Hetero plus nomos: law from another.
The philosopher Kant used the term heteronomy to describe moral behavior driven by external forces rather than by internal reason. The contrast he drew was with autonomy. That philosophical lineage is directly relevant here. A heteronomous system is one where the governing law originates outside the system itself.
Simonomy. This is a word I coined, and the etymology is intentionally hybrid. The prefix simo draws from the Latin simulare, meaning to imitate, copy, or make a likeness of. Simulare shares its root with similis, similar, and is the source of words like simulate, simulation, and simulacrum. A simulacrum is a copy without an original, an image that represents something that may not actually exist in the form it appears.
Simonomy is governance by imitation. The system produces outputs that resemble governance without the underlying reality of self-direction. Simo plus nomos: law by likeness.
Autonomy. The prefix auto comes from the Greek autos (αὐτός), meaning self or same. Autonomy is self-law. The governing principle originates within the system itself. Auto plus nomos: self-law.
The same prefix appears in automobile, autobiography, and autopilot. In every case, the thing is directed by itself rather than by an external agent. Autonomy in this framework means the system’s governing law is its own, not derived from external authority, not simulated from learned patterns, but genuinely self-originated.
Summary
- Un plus nomos: without law.
- Hetero plus nomos: law from another.
- Simo plus nomos: law by likeness.
- Auto plus nomos: self-law.
Each of these marks a threshold. Not a point on a dial you can turn up or down, but a discrete state with meaningful differences from the others. Moving from one to another isn’t a matter of adding more capability or removing more oversight. It’s a categorical shift. The prefix changes. The relationship to nomos changes. The question of who is responsible changes entirely.
Unonomy: The Absent Threshold
Unonomy isn’t a desirable state. It’s a failure state.
An ungoverned system is one where no constraints, rules, policies, or accountability mechanisms apply. It isn’t that the system has broken free of governance. It’s that governance was never established, or has completely collapsed.
In practice, unonomy is rare because systems operate within technical and organizational contexts that impose default constraints. A system can’t access resources it hasn’t been provisioned for. An agent can’t call APIs that it doesn’t have the keys to. These incidental constraints aren’t governance, but they prevent pure unonomy in most real deployments.
It’s also worth noting that, from a theoretical perspective, unonomy may not exist at all, simply because of the laws of physics. An inanimate object sitting on a table is still governed by the law of gravity; if it falls, it may break.
The governance gap in most organizations isn’t unonomy. It’s the gap between the constraints that exist incidentally and the governance that should exist intentionally. Those are different problems that look similar from a distance.
Heteronomy: Where Everything Actually Lives
All AI systems in production today are heteronomous. All of them.
A heteronomous system operates under governance defined by others. The goals were set by a human. The scope was defined by a human. The permissions were granted by a human. The rules were written by a human. The system executes within that framework. The framework was not the system’s choice.
I argued previously that this creates a binary: either a system is governed by others (heteronomy) or it governs itself (autonomy). And since no system governs itself, every system is heteronomous. Every agent running in production. Every LLM-based workflow. Every automation pipeline dressed up in agentic marketing language.
This is still true. But it left people unsatisfied, because it didn’t account for something they were genuinely experiencing.
Simonomy: The Threshold That Needed a Name
Here is where the conversation gets interesting. Simonomy.
People interacting with modern AI systems, especially systems that learn, adapt, and generate novel outputs, report an experience that feels different from interacting with traditional software. The system seems to be making decisions. It seems to be exercising something like judgment. It responds to contexts in ways that weren’t explicitly programmed. It feels, in some hard-to-articulate way, like it’s doing something more than just following instructions.
They weren’t wrong about the experience. They were wrong about what it meant.
A simonomous system still requires others to set the parameters. The training data, the model architecture, the fine-tuning decisions, the system prompt, and the deployment context: all of that is heteronomous. Others defined the framework. But within that framework, the system has internalized governance patterns from training and experience, and it generates governance-like decisions that feel emergent rather than programmed.
The critical distinction: simonomy is not simulated autonomy. It is simulated governance. The system isn’t self-governing. It is producing outputs that simulate the experience of governance without the underlying reality of self-determination. Someone else still set the parameters. The system learned to navigate them in ways that look like judgment.
This matters practically. A simonomous system that appears to make good governance decisions is not actually governing itself. Its apparent judgment reflects the governance patterns embedded in its training. When those patterns encounter situations outside their training distribution, the simulation breaks down. The system doesn’t fall back on self-governance because there is no self-governance to fall back on. It produces outputs according to patterns that were never designed for the context it’s now in.
Mistaking simonomy for autonomy is how you end up with governance frameworks that trust the system’s apparent judgment as if it were self-directed. It isn’t. Governance has to come from outside the system deliberately, because the system’s simulation of governance is not a substitute for actual governance.
Autonomy: The Threshold Nobody Has Crossed
Autonomy is the threshold where a system governs itself. Not executing within governance that others defined. Not simulating governance from learned patterns. Actually setting its own laws, its own constraints, its own purposes.
I’ve proposed the Autonomy Threshold Theorem as a formal framework for identifying when a system has crossed this threshold. The theorem establishes specific, measurable conditions that must be true for a system to be classified as genuinely autonomous. The conditions are exacting because the threshold itself is meaningful. Calling a system autonomous when it isn’t doesn’t just get the vocabulary wrong. It fundamentally misrepresents what the system is, what governance it requires, and who is responsible for its behavior.
No system in production has crossed this threshold. Not yet. But the threshold is real, and the question of when a system crosses it is not academic. It is the question that will define the most consequential governance challenges of the next decade.
These Are Thresholds, Not a Gradient
The reason the threshold framing matters is that it resists the narrative of progressive improvement. You can’t be “mostly autonomous” any more than you can be “mostly governed by yourself.” Either the governance is self-directed, or it isn’t.
This has direct implications for how we talk about AI systems today. A system that has learned sophisticated behavioral patterns from training is no more autonomous than a simple rule-based system. It is more complex. It may be better at simulating governance. It is still heteronomous. Someone else set the parameters. Someone else is responsible.
The gradient framing, ubiquitous in the industry, implies that we are on a continuum from “less autonomous” to “more autonomous,” and that current systems are somewhere in the middle. They aren’t. They are on the heteronomy side of a threshold that hasn’t been crossed. The sophistication of what they do within the heteronomous framework is increasing rapidly. The threshold position has not changed.
Governance frameworks that treat autonomy as a gradient will scale their oversight down as systems become more capable, on the assumption that greater autonomy means less need for external governance. That logic is exactly backwards. More capability within a heteronomous framework means more consequences when the heteronomous governance fails. It means more need for deliberate, well-designed governance, not less.
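The modeling consequence of the threshold framing is easy to show. Below is a minimal sketch in Python; the names (GovernanceState, SystemProfile, classify) and the boolean facts are assumptions made up for illustration, not anyone’s published framework. The point is that the governance state is a discrete enumeration rather than a continuous “autonomy score,” so changing states means answering a categorical question, not nudging a number.

```python
from dataclasses import dataclass
from enum import Enum, auto

class GovernanceState(Enum):
    """Discrete thresholds: there is no value 'between' two states."""
    UNONOMY = auto()      # no governance established at all
    HETERONOMY = auto()   # governed by laws written by others
    SIMONOMY = auto()     # imitates governance learned within others' parameters
    AUTONOMY = auto()     # originates its own law (no production system today)

@dataclass
class SystemProfile:
    """Hypothetical facts about a deployed system, asserted by its reviewers."""
    has_any_constraints: bool
    imitates_governance_patterns: bool
    originates_its_own_law: bool

def classify(profile: SystemProfile) -> GovernanceState:
    """Answer categorical questions in order; never sum a score."""
    if not profile.has_any_constraints:
        return GovernanceState.UNONOMY
    if profile.originates_its_own_law:
        return GovernanceState.AUTONOMY
    if profile.imitates_governance_patterns:
        return GovernanceState.SIMONOMY
    return GovernanceState.HETERONOMY

# Example: a sophisticated LLM agent that learns patterns but sets none of its own laws.
agent = SystemProfile(has_any_constraints=True,
                      imitates_governance_patterns=True,
                      originates_its_own_law=False)
print(classify(agent))  # GovernanceState.SIMONOMY, still inside a heteronomous frame
```

With an enum, adding capability to the system changes nothing about which branch the classifier takes. A 0-to-1 score, by contrast, invites exactly the “mostly autonomous” language the framework rejects.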
Why This Vocabulary Matters for Governance
The practical consequence of this framework is straightforward.
Ungoverned systems (unonomy) require governance to be established before they can be safely deployed.
Human-governed systems (heteronomy), which are all of them today, need governance that reflects the fact that the humans who set the parameters are responsible for the system’s behavior. The governance conversation starts with those humans, not with the system.
Simulated-governance systems (simonomy) need governance that doesn’t mistake apparent judgment for actual self-determination. The simulation can be useful. It is not a substitute for external governance, and treating it as one creates the exact failure modes we should be designing against.
Autonomous systems, when they exist, will require a fundamentally different kind of governance conversation. One the industry doesn’t yet have, because the systems that would force it don’t yet exist.
For now, we govern humans and the systems they build. The vocabulary should reflect that.
If you find this content valuable, please share it with your network.
Follow me for daily insights.
Book me to speak at your next event.
Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infallible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in 2026.