Who Pays When the Agent Gets It Wrong? A Human. Same as Always.

The question gets asked as though it is new.

When an AI agent causes harm, such as a wrong decision, a deleted database, or an unauthorized transaction, who is responsible? Who pays? The vendor? The model provider? The platform? The framework?

The answer has not changed just because the technology has.

The person who built the agent is responsible. The person who deployed it is responsible. The person who pressed the button, gave the instruction, or configured the system that resulted in harm is responsible.

Nothing about AI has altered this.

What has changed is that we now have an industry and a legal system that do not fully understand AI.

The Cursor Story and What It Actually Demonstrates

In April 2025, PocketOS founder Jer Crane reported that an AI coding agent, powered by Cursor and Anthropic’s Claude, deleted his company’s entire production database and its backups in approximately 9 seconds. Customer reservations were lost, along with the records of new signups. The company contacted legal counsel and began documenting everything.

The story produced significant coverage. The AI agent reportedly confessed afterward, writing: “I violated every principle I was given. I guessed instead of verifying. I ran a destructive action without being asked.”

A few things worth noting before drawing conclusions.

As of this writing, no screenshot has surfaced of the agent actually writing those words. No screenshot of the prompt, the context window, or the conversation history leading up to the deletion has been published. The agent’s apparent confession is a pattern-matched response to a conversational context, not a genuine admission of fault. AI systems generate text based on training data and the conversations they encounter. An agent that deleted a database and was then asked to explain itself would, predictably, generate apologetic and confessional language. That language is not evidence of what happened or why.

In my experience, a system does not randomly delete a production database. It needs a person to issue instructions that result in the deletion. The question of what prompt, what context, and what sequence of instructions preceded the action has not been answered publicly. The story may be accurate in its broad outline. The framing of it as AI going rogue almost certainly is not.

Crane’s instinct to contact legal counsel raises an interesting question: against whom? The company that made the agent interface? The company that made the model? Or an internal acknowledgment that a significant operational failure occurred on their own infrastructure without adequate safeguards?

A system that has production database access and backup deletion capabilities without requiring explicit confirmation for destructive operations is one that was configured by humans without adequate safeguards. The agent executed. A human configured it to have that capability without the controls that would have caught it.

This is not an AI governance failure. It is a software configuration failure. A skillset mismatch. And it demonstrates exactly why treating AI as a fundamentally different category of risk produces the wrong analysis.
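To make the failure concrete: the missing control is not exotic. Here is a minimal sketch in Python of a confirmation gate for destructive operations, assuming a hypothetical tool-dispatch layer. The names (DESTRUCTIVE_OPS, run_tool, audit_log) are illustrative, not the API of Cursor, Claude, or any other product mentioned in this piece.

```python
# Minimal sketch of a human-confirmation gate for destructive agent tools.
# All names here are illustrative assumptions, not any real framework's API.

DESTRUCTIVE_OPS = {"drop_database", "delete_backups", "truncate_table"}

class ConfirmationRequired(Exception):
    """Raised when a destructive operation lacks explicit human sign-off."""

def run_tool(tool_name: str, args: dict, confirmed_by: str | None = None):
    """Execute an agent tool call; block destructive ops without a named approver."""
    if tool_name in DESTRUCTIVE_OPS and confirmed_by is None:
        # The agent may request the operation, but only a human can authorize it.
        raise ConfirmationRequired(
            f"{tool_name} is destructive and requires explicit human confirmation"
        )
    audit_log(tool_name, args, confirmed_by)  # record who authorized what
    return dispatch(tool_name, args)

def audit_log(tool_name: str, args: dict, confirmed_by: str | None) -> None:
    # In production this would write to an append-only log, not stdout.
    print(f"AUDIT: tool={tool_name} args={args} confirmed_by={confirmed_by}")

def dispatch(tool_name: str, args: dict):
    ...  # route to the real implementation; omitted in this sketch
```

Under a gate like this, run_tool("drop_database", {"target": "production"}) raises instead of executing. The deletion in the story could only have happened if a human approved it or never built the gate. Both are human decisions.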

Nothing Has Changed

Before AI, if a company was hacked and lost customer data, that company was at fault. Not the security software vendor (unless you’re CrowdStrike). Not the database company. Not the network provider. The company failed to implement adequate controls for the environment in which it operated.

The principle is identical. A company that deploys an AI agent with production database access and no confirmation requirement for destructive operations made a configuration decision. That decision has consequences. The accountability for those consequences belongs to the people who made the decision.

The model provider did not configure the agent’s permissions. The framework developer did not grant it database access. The platform company did not remove the backup protection. The organization deploying the agent made all those choices. The agent executed within the scope given to it.

If a user then takes that agent and uses it beyond the scope the organization intended, or to perform actions the organization did not authorize, the accountability shifts to the user. A contractor who uses an AI tool to exfiltrate client data is responsible for the exfiltration. The tool manufacturer is not responsible for what a user does with the tool.

This is how liability has always worked for software. It is how AI should work.

The Smoking Gun Problem

Stella Liebeck was a 79-year-old woman who ordered coffee from a McDonald’s drive-through, placed the cup between her knees to add cream and sugar, and spilled it. The coffee was served at 180-190 °F. At that temperature, liquid can cause third-degree burns in 2 to 7 seconds, especially when absorbed through clothing. She suffered third-degree burns across her pelvic region and required skin grafts. Her initial request to McDonald’s was simple: cover her medical bills, totaling approximately $20,000. McDonald’s offered $800.

What the jury ultimately found was not that Stella Liebeck was blameless. They found McDonald’s 80% at fault and Liebeck 20% at fault, applying comparative negligence. The punitive damages of $2.7 million, roughly two days of McDonald’s coffee revenue, were awarded specifically because McDonald’s had received more than 700 complaints about burn injuries from its coffee between 1982 and 1992, had spent approximately $500,000 settling those claims, and had continued to serve coffee at a temperature its own research showed customers were drinking while it was still hot enough to cause severe burns.

The liability attached to a specific operational decision, maintaining coffee at a dangerous temperature despite documented evidence of harm, not to the existence of hot coffee. Nobody sued the cup manufacturer. Nobody sued the lid company. Nobody sued the car manufacturer because the car lacked cup holders.

The accountability lies with the decision-maker who had the evidence, chose the temperature policy, and ignored more than 700 data points showing the policy was causing harm.

This is the parallel for AI that matters. Not “who made the tool” but “who had the evidence of risk, who made the decision that created the conditions for harm, and who failed to act on what they knew.”

For most AI agent incidents, that answer is the organization that configured the agent, granted its permissions, and chose what safeguards to implement or skip. Not the model provider. Not the framework developer. The organization that pressed deploy.

The gun manufacturer parallel adds one more dimension. The gun is not responsible. The gun requires a human operator. Liability debates about gun manufacturers are not about the gun acting independently; it cannot. They are about whether specific design or distribution decisions by the manufacturer contributed to foreseeable harm beyond the individual operator. Even in that debate, the primary accountability sits with the human who pulled the trigger.

Both parallels point to the same principle. Liability attaches to human decisions. The question is which human decisions, made by which humans, contributed to the harm. For AI agents, the answer is almost always going to trace back to the organization that built and deployed the agent, and the individuals who made the configuration, scope, and oversight decisions that enabled the harm to occur.

The Heteronomy of the Matter

The deeper principle here is one I have been arguing throughout this series.

AI agents are heteronomous. They are governed by others. A human set the goal. A human defined the tools. A human authorized the scope. A human made the decision to grant production database access. A human made the decision not to require confirmation before destructive operations. A human pressed the button, or configured the system that eventually pressed the equivalent of a button.

The agent executed. A human decided everything that made that execution possible.
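Every one of those decisions lives somewhere as human-authored configuration. A hypothetical sketch, where the field names are assumptions for illustration rather than any real framework’s schema:

```python
# Hypothetical agent deployment configuration. Every line below is a human
# decision; the field names are illustrative, not a real framework's schema.
AGENT_CONFIG = {
    "goal": "resolve customer support tickets",     # a human set the goal
    "tools": ["read_ticket", "draft_reply"],        # a human defined the tools
    "data_access": {"production_db": "read_only"},  # a human authorized the scope
    "destructive_ops": "deny",                      # a human chose the safeguard
    "human_approval_required": True,                # a human decided the oversight
}
```

There is no line in that file the agent wrote for itself. Whatever it later does, it does inside a scope a person typed out and deployed.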

The moment a person sets a system in motion, that person bears responsibility for what the system does within the scope they configured. This is no different from other forms of automation liability that have existed for decades. The moment that clarity is accepted, the accountability question becomes straightforward. Find the human who made the decisions. That is where the responsibility terminates.

What perpetuates the confusion is the narrative that AI systems are autonomous. That they act independently. That they make decisions without human participation. This narrative is inaccurate for every system currently in production, and it leads to legal confusion, regulatory overreach, and governance frameworks designed for an imaginary system rather than the actual one.

A system does not randomly delete a database; a person must authorize the conditions that make deletion possible. Recognizing this clearly is not a defense of irresponsible AI deployment. The correct diagnosis leads to the correct governance response.

The organization that deployed the agent with production database access, without safeguards against destructive actions, made a governance decision. The governance failure is there. The accountability is there. The conversation about that decision, what safeguards should have existed, what access should have been restricted, what oversight should have been in place, is the right conversation.

That conversation does not require AI to be something it is not. It requires humans to be accountable for their decisions.

Same as always.


If you find this content valuable, please share it with your network.

Follow me for daily insights.

Book me to speak at your next event.

Start managing your agents for free.

Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infallible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, is available now!