Over-Engineering Automation for the Sake of AI
I watch a lot of AI demos. Most of them don’t need AI to function.
That is not a new observation. The pattern has been going on for a while, and it has accelerated recently.
About four years ago, I had a consultation call with a company that described what they called a revolutionary new client onboarding experience. Fully AI-driven. The system would ask questions, allow the new client to respond, and dynamically reveal new questions based on previous answers. Available 24/7. Intelligent. Adaptive.
I sat through the whole explanation before responding.
Google Forms does this. Has done this for years. Conditional logic, branching questions, response-triggered workflows. Free. No model. No API costs. No latency from inference. No probabilistic outputs that might occasionally give a client a question completely unrelated to what they answered. Just a form, doing exactly what a form is supposed to do.
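For the skeptical, here is roughly what that "dynamic" questioning reduces to. This is a minimal sketch, not the Google Forms API; the question IDs and answers are invented for illustration. The point is that the adaptivity is a lookup table and a loop, not a model.

```python
# Branching-question logic as a lookup table. All questions, answers,
# and IDs here are hypothetical.
FORM = {
    "start": {
        "question": "Are you onboarding as a business or an individual?",
        "next": {"business": "company_size", "individual": "use_case"},
    },
    "company_size": {
        "question": "How many employees do you have?",
        "next": {},  # terminal question
    },
    "use_case": {
        "question": "What will you primarily use the product for?",
        "next": {},  # terminal question
    },
}

def run_form():
    node_id = "start"
    answers = {}
    while node_id:
        node = FORM[node_id]
        answer = input(node["question"] + " ").strip().lower()
        answers[node_id] = answer
        # Deterministic branch: the answer selects the next question,
        # or the form ends.
        node_id = node["next"].get(answer)
    return answers

if __name__ == "__main__":
    print(run_form())
```

Every path through that form is enumerable, testable, and identical on every run. No inference, no latency, no off-script question.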
“But AI is a better way to do it,” I was told.
A better way to do what, exactly?
So You Want a Search Engine
I had another client who wanted to help customers find answers when they landed on their website. They described it as scanning the site, understanding the content, and surfacing relevant information based on what a customer was asking. I listened to the whole pitch before I said anything.
“So you want a search engine.”
There was a pause on the call. The kind of pause that happens when someone realizes they have spent three months planning a solution to a problem that was solved in 1998.
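To be fair to 1998, the core mechanism fits in a few lines. Here is a toy sketch with invented documents; a real deployment would reach for an off-the-shelf engine such as Elasticsearch or SQLite's full-text search, but the principle is the same.

```python
from collections import defaultdict

# A toy inverted index: the heart of keyword search. Documents invented.
docs = {
    "pricing": "Our pricing plans start at ten dollars per month.",
    "returns": "You can return any item within thirty days for a refund.",
    "shipping": "Orders ship within two business days.",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():
        index[token.strip(".,")].add(doc_id)

def search(query):
    # Rank pages by how many query terms they contain. Deterministic,
    # auditable, and it never invents a refund policy.
    scores = defaultdict(int)
    for token in query.lower().split():
        for doc_id in index.get(token, ()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("how do I return an item"))  # ['returns']
```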
These are not edge cases. They are the rule. The hype around AI has created a kind of perceptual filter in which existing technologies become invisible, and every problem, regardless of its nature, is evaluated as a candidate for AI. The outcome is a market flooded with over-engineered products that use a large language model to do something a much simpler system would do better, faster, cheaper, and with considerably less risk of producing an answer that is confidently, fluently wrong.
You Probably Don’t Need AI For That
You do not need AI for a search result. You do not need AI for a conditional onboarding form. You do not need AI to sort a list, filter a dataset, send a notification, match a pattern, or execute a rule. These are solved problems. The solutions are reliable, deterministic, and available off the shelf at a fraction of the cost of building an AI-powered version.
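Each of those solved problems is a line or two of standard library code. A quick sketch; `notify` is a stand-in for whatever email or webhook call you already have, and the records are invented.

```python
import re

records = [{"name": "Ada", "score": 91}, {"name": "Grace", "score": 78}]

# Sort a list.
ranked = sorted(records, key=lambda r: r["score"], reverse=True)

# Filter a dataset.
passing = [r for r in records if r["score"] >= 80]

# Match a pattern.
is_email = re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", "ada@example.com")

# Execute a rule and send a notification. notify() is a placeholder
# for your existing email or webhook integration.
def notify(msg):
    print(msg)

for r in records:
    if r["score"] < 80:
        notify(f"{r['name']} needs a follow-up.")
```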
And yet here we are.
Every week brings a new AI feature, a new AI tool, a new AI this-or-that. The underlying analysis of whether AI was actually the right technology for the job is rarely visible in the announcement. The question was apparently answered before the product was built. Of course it uses AI. Everything uses AI. Why would you build it differently?
The Governance Version of the Same Problem
The same pattern has arrived in AI governance, which is the part I find most frustrating given how much time I spend in this space.
Organizations are building deterministic wrappers around LLMs, calling the result a governance solution, and presenting it as a meaningful control plane. I have written about this before, and I will say it again here. If you are wrapping an LLM with deterministic rules tight enough to restrict its probabilistic outputs to a predetermined set of acceptable responses, you have not governed the LLM. You have negated it. The LLM is doing nothing that a rules-based system without an LLM could not do more reliably and at lower cost.
And if you do not trust the LLM enough to let it operate within a reasonable range of outputs, the honest engineering question is why the LLM is in the system at all. Build it deterministically. It will be faster, cheaper, and auditable, and you will not need a governance layer for a system that was never probabilistic in the first place.
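A caricature of the pattern, with hypothetical names throughout: `call_llm` stands in for any model client, and `ALLOWED` is the predetermined set of acceptable responses. If the wrapper is tight enough that only `ALLOWED` values ever leave the system, the second function below produces identical external behavior with no model in the loop.

```python
ALLOWED = {
    "reset_password": "You can reset your password from the account page.",
    "billing": "Billing questions: contact billing@example.com.",
    "other": "Please contact support.",
}

def call_llm(prompt: str) -> str:
    """Stand-in for a real model API; returns a classification label."""
    raise NotImplementedError("wire up your LLM client here")

def governed_llm(user_message: str) -> str:
    # The "governance layer": classify, then discard anything outside
    # the predetermined set. The model can only ever echo ALLOWED.
    intent = call_llm(f"Classify into {list(ALLOWED)}: {user_message}")
    return ALLOWED.get(intent, ALLOWED["other"])

def rules_only(user_message: str) -> str:
    # The same outcome with no model: keyword rules selecting from the
    # same fixed set. Faster, cheaper, fully auditable.
    text = user_message.lower()
    if "password" in text:
        return ALLOWED["reset_password"]
    if "invoice" in text or "charge" in text or "bill" in text:
        return ALLOWED["billing"]
    return ALLOWED["other"]
```

Whatever the model does inside `governed_llm`, the caller can only ever observe one of three strings. That is not a governed model. That is a rules engine with an expensive middle step.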
The governance problem disappears when you stop creating it unnecessarily.
This is not a criticism of deterministic systems or governance frameworks. Both are valuable and necessary. It is a criticism of adding complexity that generates its own downstream problems and then marketing the solution to those problems as innovation.
My Refrigerator Is Now an Agent
The agent washing has reached a level that requires its own paragraph.
Apparently, my kitchen refrigerator now has an AI agent that monitors its temperature. I do not remember a firmware update. Last year, it did not have this agent. What it had was a notification. The freezer beeped when the door was left open and the temperature became unacceptable. A sensor, a threshold, an alert. Simple, reliable, functional.
This year, it is an agent.
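For reference, last year's version, reconstructed as a sketch. The threshold value and the sensor read are stand-ins, but the logic is the entire feature.

```python
import random
import time

FREEZER_MAX_C = -15.0  # illustrative threshold, not the real spec

def read_temperature() -> float:
    # Stand-in for the actual hardware sensor read.
    return random.uniform(-20.0, -10.0)

def monitor():
    while True:
        if read_temperature() > FREEZER_MAX_C:
            print("BEEP: door open or freezer too warm")  # the "alert"
        time.sleep(60)  # poll once a minute
```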
I should not be surprised. Overnight, everything became an agent. Scheduled jobs became autonomous workflows. Conditional logic became intelligent decision systems. Notification triggers became AI-powered monitoring. The terminology changed. The underlying mechanisms largely did not. But the vocabulary of agency and intelligence got applied broadly to things that are neither agentic nor intelligent in any meaningful sense of either word.
This is agent washing. The same category error that led someone to describe a conditional form as an AI onboarding revolution now applies to the word agent. The refrigerator sensor is not an agent. The search engine repackaged with a chat interface is not an agent. The rules-based governance wrapper marketed as a control plane is not AI governance.
Awareness Is the Best I Can Offer
I do not have an answer to any of this. That is an honest statement, not false modesty. This article is mostly about awareness. An attempt to name a pattern that I think is worth naming.
Be aware of what you call your product. Be aware of whether the architecture you chose was right for the problem, or chosen only because AI was available and the market rewards it. Be aware of what you are agent-washing in your marketing. Be aware of whether the governance problem you are solving exists only because you made a technology choice that created it.
I would like to suggest a better approach: engineering discipline, honest technology selection, and genuine problem-first thinking would produce better outcomes for everyone building in this space.
I am also confident that almost no one will listen. The incentives run in the other direction. The demos will keep coming. The agents will keep multiplying. The refrigerators will keep monitoring themselves.
At some point, the market will ask whether the complexity was worth it. That question will be uncomfortable for many people who built their products without asking it first.
If you find this content valuable, please share it with your network.
Follow me for daily insights.
Book me to speak at your next event.
Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infallible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in 2026.