Have You Built an AI Agent?

Last week, I wrote that there are no real AI agents in production today. That what we call agents are, more accurately, nothing more than highly capable automation scripts and chained API calls. Advanced macros with a great marketing team behind them.

I’ll add that not only is the definition of an agent nebulous, but what people are being sold as agents today and what agents might eventually become are drastically different things.

Which brings me to the question I actually want answered.

Have you built an AI agent?

What makes it an agent? Because it uses AI to do something?

Could you have achieved the same thing without AI?

Not rhetorically. I’m genuinely asking. And I’m asking because the answer, across enough people, tells us something important about where this industry actually is versus where it believes it is. It also tells me something useful about how to build better tools for the people building these systems.

So here’s the quiz. Answer honestly. Honesty is the point.

The Questions

  1. Have you built something you would call an AI agent?

Yes or no. Don’t overthink the definition yet. Just your honest instinct. Do you consider yourself someone who has built an AI agent?

  2. How did you build it?

Pick the one that fits best.

A) I used a no-code or low-code platform (Make, Zapier, n8n, Dify, Flowise, similar)
B) I used an agent framework and wrote code (LangChain, LangGraph, CrewAI, AutoGen, similar)
C) I built it from scratch without a framework
D) I used an AI tool to generate most of the code (Claude Code, Cursor, Copilot, similar)
E) Some combination of the above

  3. Did you use AI to build your agent?

There’s a certain irony in using AI to build an AI agent. Nothing wrong with it. But worth naming. Did you use an LLM to write the code, design the architecture, debug the logic, or generate the prompts that drive your agent’s behavior?

  4. Does your agent use an LLM?

At its core, when your agent makes a decision or generates a response, is there a language model in that path? Or is the decision logic deterministic code that you wrote or generated?

This question matters more than it seems. An agent that uses an LLM for its reasoning is probabilistic by nature. An agent that uses deterministic logic is predictable by design. Most people don’t think about which one they built until something unexpected happens.
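
To make the distinction concrete, here is a minimal sketch in Python. The names are hypothetical and call_llm is a stand-in for whatever model client your stack actually uses; the point is only where the decision gets made.

```python
# A minimal sketch of the distinction, not anyone's production code.
# route_ticket_deterministic: every input maps to exactly one output you can test.
# route_ticket_llm: the decision passes through a model call, so the same input
# can yield different routes between runs.

def route_ticket_deterministic(ticket: dict) -> str:
    # Rules you wrote; behavior is predictable by design.
    if "refund" in ticket["subject"].lower():
        return "billing"
    if ticket.get("priority") == "urgent":
        return "on_call"
    return "general_queue"

def route_ticket_llm(ticket: dict, call_llm) -> str:
    # The LLM sits in the decision path; behavior is probabilistic by nature.
    # call_llm is a hypothetical callable wrapping your model client of choice.
    prompt = (
        "Route this support ticket to one of: billing, on_call, general_queue.\n"
        f"Subject: {ticket['subject']}\n"
        "Answer with the queue name only."
    )
    answer = call_llm(prompt).strip().lower()
    # Guard against outputs you didn't anticipate.
    return answer if answer in {"billing", "on_call", "general_queue"} else "general_queue"
```

Same task, two very different failure modes: the first you can unit test exhaustively; the second you can only constrain, monitor, and hope you caught the surprising cases.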

  5. Where does your agent live?

A) It runs inside the platform I built it on, and I don’t control the infrastructure
B) It runs on my company’s infrastructure
C) It runs on cloud infrastructure I control
D) Honestly, I’m not entirely sure

This one is less about judgment and more about understanding. Many people who have built agents on third-party platforms have significantly less control over them than they realize. Your agent runs wherever the platform runs. If the platform goes down, changes its pricing, or gets acquired, what happens to your agent?

  6. Do you think anyone can build an AI agent today?

A) Yes, the tools have made it accessible to almost anyone
B) Yes, but you need some technical background
C) No, you still need real engineering skills
D) I’m not sure what the bar actually is anymore

This is the question I find most interesting. The democratization narrative is strong. No-code platforms promise agents without code. AI code generation promises agents without expertise. But there’s a difference between building something that runs and building something you understand, control, and can govern when it goes sideways.

  7. What does your agent actually do?

Don’t describe the vision. Describe what it does today. Specifically. Does it answer questions from a knowledge base? Does it route customer requests? Does it write code? Does it make bookings? Does it call external APIs? Does it make decisions that affect real systems?

And the follow-up: does it do that reliably, every time, or does it sometimes produce outputs that surprise you?

Why I’m Asking

The conversation about AI agents is dominated by two groups.

The first group believes the hype. Agents are autonomous. Agents are transformative. Anyone can build one. The future is agentic. This group tends to conflate an impressive demo with capability and to mistake that demo for production reality.

The second group dismisses the category entirely. LLMs are just autocomplete. Agents are just scripts. Nothing new is actually happening. This group tends to miss the genuine engineering progress being made, even if the marketing around it is overinflated.

The truth, as usual, is more specific and more interesting than either camp.

I wrote that what we have are advanced macros. I stand by that. But advanced macros are not trivial. The gap between what automation could do five years ago and what it can do now is significant. The question is whether we have the vocabulary to accurately describe that gap, and whether the governance, security, and accountability frameworks we’re building are designed for what we actually built or for what we imagined.

That’s what these questions are trying to surface.

If you built your agent on a platform you don’t control, using AI to generate code you don’t fully understand, running an LLM whose decisions you can’t fully predict, doing things in production that affect real systems, without verifiable identity or behavioral contracts or audit trails, then you haven’t just built an agent. You’ve built a governance problem.

That’s not a criticism. It’s a description of where most of us are. Including people who have been doing this seriously for a long time.

Share Your Answers

Drop your answers in the comments. I’m reading everything. What you’ve built, how you built it, where it lives, what it does, and whether you’d still call it an agent after thinking through these questions.

The more honest the answers, the more useful this becomes. For the conversation. For the field. And for building tools that actually meet builders where they are.


If you find this content valuable, please share it with your network.

Follow me for daily insights.

Book me to speak at your next event.

Chris Hood is an AI strategist and author of the #1 Amazon Best Sellers Infallible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in 2026.