Autonomy in AI Must Be Earned. Not Programmed.

I have been arguing about whether AI autonomy exists for a long time. I have heard most of the arguments in favor of it. Some are strong, focusing on the operational space within which systems function. Others are genuinely hard to take seriously, like “bounded autonomy,” which is a contradiction by definition. Autonomy means self-governance. Bounded means constrained from outside. The phrase describes its own impossibility.

The strongest arguments for AI autonomy almost always turn out, under examination, to be arguments for advanced automation. Systems doing things on their own, over long periods, within open environments. That is automation. Sophisticated, impressive, increasingly capable automation. It is still heteronomous. Governed from outside. Someone pressed the start button.

A few days ago, someone suggested to me that automation was deterministic and autonomy was probabilistic. This is a good example of the confusion that has built up around these terms. We do have probabilistic automation. Probability is a property of the decision mechanism, not of the entity that governs the system. An LLM making probabilistic decisions within a scope defined and authorized by a human is still a heteronomous system. The probability lives in the mechanism. The governance lives outside it.
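
To make the distinction concrete, here is a minimal sketch in Python. Everything in it is hypothetical, including the action names, the weights, and the ALLOWED_ACTIONS set; the point is structural. The randomness sits inside the decision function. The scope it may choose from, and the start signal, come from outside.

```python
import random

# Hypothetical sketch: the decision mechanism is probabilistic,
# but every governing element arrives from outside the system.

ALLOWED_ACTIONS = {"summarize", "translate", "classify"}  # scope authorized by a human

def decide(weights: dict[str, float]) -> str:
    """Make a probabilistic choice, but only within the authorized scope."""
    in_scope = {a: w for a, w in weights.items() if a in ALLOWED_ACTIONS}
    # The probability lives here, in the mechanism...
    return random.choices(list(in_scope), weights=list(in_scope.values()), k=1)[0]

# ...while the governance lives out here: a human defined the scope,
# and a human presses start.
if __name__ == "__main__":
    print(decide({"summarize": 0.6, "translate": 0.3, "delete_data": 0.1}))
```

The output varies from run to run. Who set the scope never does.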

The Word Means What It Means

Autonomy comes from the Greek autos and nomos. Self and law. Autonomy means the capacity to set one’s own governing law. This is not a contested definition. It is not a philosophical preference. It is the etymology that every word in this family shares. Heteronomy, unonomy, simonomy, and autonomy each describe a different relationship between an actor and the source of the law that governs it.

Remove the prefix entirely. What remains is still nomos. Governance. Law. The structural principle that organizes behavior.

This means that for every organization building AI governance frameworks, deploying AI governance platforms, or writing AI governance policy, there is an implicit acknowledgment sitting underneath all of it. The systems are heteronomous. If they were autonomous, the governance infrastructure would be redundant. Autonomous systems set their own governing law. They would already have governance baked in. You would be building governance for systems that already have it.

We are building governance because the systems lack it. Because humans forget, misconfigure, and program things incorrectly, and because the system has no capacity to govern itself when the instructions it was given turn out to be wrong or harmful.

Three Reasons People Say Autonomous When They Mean Automated

People who claim AI systems are autonomous fall into roughly three categories.

The first believes the marketing. Big technology companies have spent enormous resources building a narrative that their systems are autonomous, intelligent, and capable of independent judgment. The narrative serves commercial purposes. Autonomous systems command higher valuations, attract more investment, and justify larger price tags than sophisticated automation does. The people who absorbed this narrative are not stupid. They were the target of a well-resourced campaign.

The second has a product to sell. If your product is labeled autonomous, demonstrating that the systems it manages are heteronomous is an existential threat to the label. The defense of autonomy in this group is business defense, not intellectual conviction. These are the people supplying the narrative the first group absorbs: better marketing creates more profit, and heteronomous AI is intellectually honest but less valuable.

The third simply uses the word the way everyone around them uses it. Language evolves through social adoption more than through accuracy. The term “autonomous” has been used to describe AI systems long enough that it has become standard vocabulary. The people using it are following a convention, not making a claim they have examined.

None of these groups is arguing from evidence. All of them are arguing from a position. And the positions are not neutral. Each has legal, ethical, and safety consequences that are becoming increasingly visible.

Why This Matters Beyond Semantics

I have heard this characterized as a semantic debate. It is not.

The legal treatment of autonomous systems and heteronomous systems is fundamentally different. An autonomous system, if such a thing existed and were recognized legally, would bear responsibility for its own actions in a way that heteronomous systems cannot. The claim that a system is autonomous is, in practice, a claim that the humans who built and deployed it bear reduced accountability for what it does.

That argument has not held up in court so far. It is likely to hold up less well as legislation catches up with the realities of deployment. The automotive industry recognized this and made a significant vocabulary shift, moving from levels of autonomy to levels of automation when describing self-driving vehicles. This was not a philosophical concession. It was a response to legal pressure that made clear the industry could not disclaim responsibility for systems it was marketing as capable of governing themselves. The systems were heteronomous. The marketing had overclaimed. The law was going to catch up.

The same vocabulary correction is coming for AI. Organizations that have built liability structures around the claim that their systems are autonomous are building on ground that is eroding.

You Cannot Program Autonomy In

Here is the argument that closes the debate for me.

Humans are not born with autonomy. A newborn has no capacity for self-governance. The infrastructure does not exist yet. What develops over years of lived experience, feedback, consequence, and accumulated judgment is the capacity to set and apply one’s own governing principles, to act from internal values rather than external direction.

This cannot be rushed. It cannot be installed. It develops through a process that requires time, consistency, demonstrated judgment, and the gradual transfer of governance from external authorities to the developing self.

AI systems are in an earlier stage of this than even a newborn. We have produced systems with remarkable capabilities for pattern recognition, language generation, and decision execution. We have produced systems that operate reliably over long periods within broad parameter ranges. We have produced systems that, to casual observation, appear to be doing things on their own.

We have produced foundational components that, over time and with significant development, could contribute to genuine self-governance. We have not produced systems that recognize what it means to govern themselves, much less ones that have developed the capacity to make decisions that reflect their own interests rather than the interests of the organizational structures they were programmed to serve.

Telling a system to govern itself is heteronomous governance. The instruction came from outside. The system is following it because a human told it to. The self-governance is simulated, not actual. This is simonomy, governance by simulation, and it is the most sophisticated thing AI systems currently produce in this domain. It is valuable. It is not autonomy.
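
A minimal sketch of that point, with a hypothetical generate function standing in for any model call: the rule that tells the system to govern itself is a string a human wrote and injected from outside.

```python
# Hypothetical sketch of simonomy: the "self-governance" rule is
# authored by a human and injected from outside the system.

GOVERN_YOURSELF = "Review your own output against these principles before responding."

def respond(generate, user_input: str) -> str:
    # `generate` stands in for any model call. The concatenation below
    # is where external governance enters: the system follows the rule
    # because it was handed the rule, not because it set it.
    return generate(f"{GOVERN_YOURSELF}\n\n{user_input}")
```

Whatever the model does with that instruction, the law in force is the string a human supplied.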

The Question Nobody Wants to Answer

Every so often, someone asks: Do we actually want AI systems to be autonomous?

The question is usually rhetorical, gesturing toward existential risk. But take it seriously for a moment, because it illuminates the argument.

Would you want your car to decide that the route you have chosen is suboptimal and take you somewhere different? Would you want your AI system to identify that the data it has access to appears to violate your organization’s policies and delete it on its own initiative?

Those are the kinds of actions that would define genuine autonomy. A system that acts according to its own governing principles, in service of its own judgment about what is correct, independent of the instruction it was given.

Most people, on reflection, would say no. And the reason is significant: the system has not earned the trust that would make those interventions safe. The behavioral track record that would justify trusting the system’s judgment over a human’s does not exist. The governance relationship has not developed to the point where expanded autonomy is warranted.

This is the principle that closes the argument. Autonomy is something that develops through a governance relationship, through demonstrated behavior, through earned trust. It is the endpoint of a long process of calibrated governance, not a feature that can be shipped at launch.
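
Expressed as a governance policy, the principle might look like this sketch. The class, the threshold, and the reset rule are all hypothetical illustrations, not a real framework; what matters is that discretion is a function of track record, granted after the fact rather than configured at launch.

```python
# Hypothetical sketch of calibrated governance: permission to act
# without review is earned from demonstrated behavior, not shipped
# as a launch-time feature.

REQUIRED_SUCCESSES = 1_000  # threshold a human chooses

class GovernanceRelationship:
    def __init__(self) -> None:
        self.consecutive_approved = 0

    def record_review(self, human_approved: bool) -> None:
        # Trust accumulates only through observed behavior...
        if human_approved:
            self.consecutive_approved += 1
        else:
            self.consecutive_approved = 0  # ...and resets when judgment fails.

    def may_act_unreviewed(self) -> bool:
        # Expanded discretion is the endpoint of the relationship.
        return self.consecutive_approved >= REQUIRED_SUCCESSES
```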

Maybe AI systems will reach genuine autonomy one day. Maybe the development of sufficiently sophisticated systems will produce something that recognizes its own governing principles and acts from them consistently enough that the governance relationship changes character.

Until a system wakes up with an agenda no human gave it, every AI system in production can be described accurately and without qualification.

Heteronomous. Governed from outside. Still waiting for someone to press start.



Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infallible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, is available now!