
The Great Autonomy Retreat: How Pushback Reveals the Truth


For the full philosophical argument, read The Autonomy Threshold.

I’ve been having a lot of conversations about AI autonomy lately. Online, at conferences, in comment threads. And I’ve noticed a pattern.

People use the word “autonomy” constantly. Autonomous agents. Autonomous systems. Autonomous AI. It’s everywhere across marketing materials, LinkedIn posts, technical documentation, and regulatory proposals.

But when I push back and ask what they actually mean by “autonomous,” something interesting happens.

They retreat.

The Retreat

Here’s the kind of response I get after questioning whether AI agents are autonomous:

“Well, sure, these systems only operate within the parameters and constraints we give them. They’re not conscious or truly independent. When we say ‘autonomous,’ we really just mean they can take actions without needing a human to approve every single step.”

Read that again. The person is describing a system that operates within given parameters, isn’t truly independent, and doesn’t require approval at every step.

That’s not autonomy. That’s automation with delegation.

Here’s another typical response, from a discussion about agentic AI replacing workers:

“We’re not talking about autonomy in the philosophical sense. We just mean the system can access files, run commands, and decide what to do next without checking in with a human at each step. It’s about speed and efficiency. Humans are still in control of all the important decisions.”

Again: operating under instructions, deciding next steps within a defined scope, humans controlling the important decisions. That’s a script with conditional logic. That’s an automated workflow. That’s not autonomy.

And another:

“Obviously not autonomy the way philosophers mean it. In engineering, it’s just shorthand for a system that can plan out steps and execute them on its own.”

Engineering shorthand. There it is.
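
Strip away the hedging, and what these responses describe looks like the sketch below: a loop that picks its next step, where the goal, the available actions, and the definition of “done” all come from outside. This is a minimal illustration with hypothetical names, not any vendor’s actual agent framework.

```python
# Minimal sketch of what the responses above describe.
# Every name here is hypothetical; the point is the shape, not a real API.

def search(state):                 # the "tools": scoped in advance by a human
    return {**state, "flight_found": True}

def book(state):
    return {**state, "booked": state.get("flight_found", False)}

TOOLS = {"search": search, "book": book}

def is_done(state):
    # The success criterion: written by the designers, not the system.
    return state.get("booked", False)

def choose_next_action(state):
    # "Decides what to do next": conditional logic over pre-approved options.
    # In a real agent this might be an LLM call, but the option set is still
    # fixed from outside.
    return "book" if state.get("flight_found") else "search"

def run_agent(state=None):
    state = state or {}
    while not is_done(state):          # runs until the human-defined goal is met
        action = choose_next_action(state)
        state = TOOLS[action](state)   # acts without per-step approval
    return state

print(run_agent())  # {'flight_found': True, 'booked': True}
```

Nothing in that loop sets its own ends. It just executes toward a goal somebody else wrote down.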

Article: 13 Arguments to Demonstrate why AI is Not Autonomous


One Word, Infinite Definitions

Autonomy has a meaning. It’s not obscure or contested. From Kant through contemporary philosophy of action, autonomy means self-governance, the capacity to determine your own laws, set your own ends, and evaluate your own success. An autonomous agent isn’t just one that acts without moment-to-moment supervision. It’s one that decides for itself what counts as success.

But in AI discourse, “autonomy” has become a container for whatever the speaker needs it to mean:

  • “It can execute multi-step tasks” → autonomy
  • “It doesn’t require constant human input” → autonomy
  • “It can select among options” → autonomy
  • “It runs in the background” → autonomy
  • “It’s faster than manual processes” → autonomy

None of these are autonomy. All of them are automation with varying degrees of sophistication.

The tell is in the qualifiers. When a single word needs constant modification to mean different things, something has gone wrong:

  • Bounded autonomy — autonomy that operates within boundaries set by someone else
  • Semi-autonomy — autonomy that’s only partial (partial self-governance?)
  • Limited autonomy — autonomy with limits imposed externally
  • Human-level autonomy — as opposed to what, dog-level autonomy?
  • Task autonomy — autonomy for specific tasks only
  • Operational autonomy — autonomy in operations but not in… what, exactly?

These phrases are trying to have it both ways. They want the rhetorical weight of “autonomy,” which implies agency, independence, and self-direction, while describing systems that are none of those things.

My favorite contradiction is one I encountered in a paper: “automated autonomy.” If you need to specify that your autonomy is automated, you’ve revealed that you’re not talking about autonomy at all. You’re talking about automation and borrowing a more impressive word.

How We Got Here

The corruption of “autonomy” didn’t happen overnight. It was a slow drift, driven by engineering convenience and marketing incentive.

It started in robotics. In the 1980s and 1990s, researchers needed a way to describe systems that could operate without continuous human teleoperation. A robot that could navigate without someone steering it via joystick was, in their terminology, “autonomous,” as opposed to “teleoperated.” This was a reasonable technical distinction. But operational independence isn’t autonomy. It’s automation that doesn’t require a hand on the controls.

The automotive industry scaled the confusion. When self-driving development accelerated, engineers borrowed the robotics vocabulary. SAE International formalized this in J3016, defining levels 0 through 5 as degrees of driving automation, but the industry universally called it “levels of autonomy.”

Marketing saw opportunity. “Automated” sounds like your dishwasher. “Autonomous” sounds like the future. As AI capabilities grew, companies reached for the more impressive word. The term became a marketing asset, detached from its meaning.

There’s a counter-movement. To their credit, SAE has begun correcting course. The 2021 revision of J3016 explicitly lists “autonomous” as a deprecated term, recommending “driving automation” instead. The standard now states that terms like “autonomous,” “self-driving,” and “robotic” are “used inconsistently and confusingly” and should be avoided. The correct terminology, per SAE, is “motor vehicles with automated driving systems,” not autonomous vehicles. (SAE J3016-2021 terminology guidance)

The problem is that the damage is done. The deprecated term is everywhere in regulations, marketing, headlines, and ordinary speech. SAE can update its standard, but the semantic rot has already spread from engineering jargon to legal text to public understanding.

Why the Slippage Happens

I don’t think most people are deliberately misleading. The slippage occurs for several reasons.

First, capability feels like autonomy. When a system performs an impressive task without human intervention, it feels autonomous. A self-driving car navigating traffic seems to be making its own decisions. An AI agent booking flights and sending emails seems to be acting independently. The phenomenology of watching these systems operate triggers our intuitions about agency, even though the architecture doesn’t support it.

Second, “automation” sounds boring. We’ve had automation for decades. Thermostats are automated. Assembly lines are automated. Email filters are automated. Calling your cutting-edge AI system “automated” puts it in the same category as a sprinkler timer. “Autonomous” sounds like the future. It implies a leap, not an increment.

Third, the distinction is genuinely subtle. The difference between a very sophisticated automated system and a genuinely autonomous one isn’t visible in behavior. Both can execute complex tasks, respond to novel situations, and operate without constant supervision. The difference is architectural: who sets the terminal objectives, and who retains override authority. And that’s not something you can observe from the outside.
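
To make that concrete, here is a hedged sketch of two systems whose observable behavior is identical; the classes and names are hypothetical, not a real framework. The only difference is where the terminal objective comes from and whether an external override exists, which is exactly what you can’t see from outside.

```python
def plan_toward(objective):
    # Stand-in for arbitrarily sophisticated planning. Both systems share it,
    # so their outward behavior is the same.
    return f"next action toward {objective!r}"

class KillSwitch:
    triggered = False

class AutomatedAgent:
    """Terminal objective injected from outside; override authority retained."""
    def __init__(self, objective, kill_switch):
        self.objective = objective        # set by designers or users
        self.kill_switch = kill_switch    # external override authority

    def step(self):
        if self.kill_switch.triggered:    # humans can always halt it
            return None
        return plan_toward(self.objective)

class AutonomousAgent:
    """What the word actually requires: self-set ends, no external override."""
    def __init__(self):
        self.objective = self.choose_own_end()   # self-governance

    def choose_own_end(self):
        raise NotImplementedError("nothing we've built does this")

    def step(self):
        return plan_toward(self.objective)       # no kill switch to consult

agent = AutomatedAgent("summarize inbox", KillSwitch())
print(agent.step())   # behaviorally indistinguishable from "autonomy"
```

Watching either system run tells you nothing. You have to look at the constructor.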

So people reach for “autonomy” because it captures the felt sense of what these systems do, even though it misdescribes what they are.

Why It Matters

You might think this is just semantics. Who cares what we call it, as long as we understand what it does?

But language shapes thought. And muddled language creates muddled thinking.

Accountability becomes unclear. If a system is “autonomous,” who’s responsible when it fails? The language invites us to think of the system as an agent making its own decisions, which diffuses responsibility away from the humans who designed, deployed, and operate it. But if we correctly understand the system as automated, responsibility stays where it belongs.

Regulation gets confused. Policymakers are writing rules for “autonomous systems” without clear definitions of what autonomy means. If the industry uses “autonomy” to mean “automation with extra steps,” and regulators use it to mean something closer to genuine self-governance, the resulting rules will be either too weak or too strong for the actual systems they govern.

Risk gets misframed. The AI safety conversation often focuses on the dangers of autonomous AI pursuing misaligned goals. That’s a real concern for genuinely autonomous systems, if they ever exist. But if we’re calling everything “autonomous,” we lose the ability to distinguish between autonomy risks and the much more mundane (but still real) risks of poorly specified automation.

Trust gets misplaced. When users hear “autonomous,” they may assume the system is capable of judgment, discretion, and context-sensitivity that it doesn’t actually possess. They may over-rely on it, under-supervise it, or fail to catch errors because they’ve been told the system handles things autonomously.


The Honest Vocabulary

Here’s what I’d propose.

Automated: The system executes predefined or learned behaviors without moment-to-moment human input. This covers everything from thermostats to LLMs.

Agentic: The system can decompose goals into sub-tasks, select among actions, and execute multi-step processes. This is a useful term for current AI capabilities without implying self-governance.

Self-operating: The system can run without continuous human supervision. Descriptive, accurate, no metaphysical baggage.

Autonomous: The system determines its own terminal objectives and is architecturally immune to external override of those objectives. Reserved for systems that genuinely self-govern, if we ever build them.

None of these needs qualifiers. “Bounded automated system” is redundant; every automated system operates within bounds. “Semi-agentic” might mean something. But “semi-autonomous” is an oxymoron.
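
One way to see how these terms nest is as interfaces. The sketch below uses Python’s typing.Protocol; the method names are mine, not a standard taxonomy.

```python
from typing import Protocol

class Automated(Protocol):
    """Executes predefined or learned behavior without per-step human input."""
    def run(self) -> None: ...

class Agentic(Automated, Protocol):
    """Additionally decomposes a given goal into sub-tasks and selects actions."""
    def plan(self, goal: str) -> list[str]: ...

class SelfOperating(Automated, Protocol):
    """Additionally runs without continuous human supervision."""
    def run_unattended(self) -> None: ...

class Autonomous(Protocol):
    """Sets its own terminal objectives and cannot be overridden externally.
    No system yet built implements this interface."""
    def choose_own_objective(self) -> str: ...
```

Note that Agentic and SelfOperating extend Automated: they are automation with extra capabilities. Autonomous doesn’t extend Automated at all, because it’s a different kind of thing.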

The Test

When someone tells you a system is autonomous, ask one question: “Who decides what counts as success?”

If the answer is “the system’s designers,” “the training process,” “the user’s prompt,” or “the company’s safety team,” it’s not autonomous. It’s automated. Possibly very sophisticated automation. But automation nonetheless.
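
As a decision rule, the test is short enough to write down. The argument below is a hypothetical label, not a measurable property; the point is that every answer other than “the system itself” resolves to automation.

```python
def classify(success_defined_by: str) -> str:
    # "Who decides what counts as success?"
    if success_defined_by == "the system itself":
        return "autonomous"
    # Designers, training process, user prompts, safety teams: all automation.
    return "automated"

print(classify("the designers"))       # automated
print(classify("the user's prompt"))   # automated
```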

Autonomy isn’t a spectrum. It’s not engineering shorthand. It’s not a marketing term for “does stuff without asking.”

It means self-governance. And nothing we’ve built yet governs itself.


If you find this content valuable, please share it with your network.

Follow me for daily insights.

Schedule a free call to start your AI Transformation.

Book me to speak at your next event.

Chris Hood is an AI strategist and author of the #1 Amazon Best Seller “Infallible” and of “Customer Transformation,” and has been recognized as one of the Top 40 Global Gurus for Customer Experience.
