The Number One Question in AI Governance Already Has an Answer
The most frequently asked question in AI governance, the one that appears in every panel discussion, every LinkedIn thread, every enterprise risk conversation, is some version of this: Who is accountable?
Who is accountable for what the agent did? Who is responsible for the agentic loop? Who owns the outcomes of multi-agent workflows? Who answers when something goes wrong?
The question keeps getting asked as though it is unsolved. It is not.
A human. Pick one. They are now responsible.
The reason this feels complicated is not that the answer is complicated. It is that we have collectively decided that AI is categorically different from every other piece of software an organization has ever deployed. And that decision is the problem with most AI governance implementations.
AI Is Software
To be clear, I’m not talking about human-in-the-loop oversight. I’m talking about who owns the outcome. Start with a scenario that has nothing to do with AI.
An employee downloads confidential documents and shares them on social media. Who is accountable?
Not the social media platform. Not the upload mechanism. Not the download tool. The employee who made the decision. Possibly the manager who granted access to the material. Certainly the organization that failed to enforce its own data policies.
Nobody in that scenario asks whether the software is responsible. The software executed the instructions it was given. The humans who made decisions along the chain of custody are the accountable parties. This is not a novel concept. It is the foundation of how organizations have managed software, data, and employee behavior for decades.
Now replace “confidential documents shared on social media” with “proprietary data entered into an AI tool.” The accountability structure is identical. A human accessed a system. A human made a decision to use that system in a particular way. A human bears responsibility for that decision. The fact that the system is powered by a language model does not change the accountability chain.
AI is software. The accountability question has the same answer it has always had for software. Humans are responsible. Specifically, the humans who built it, deployed it, authorized it, operated it, and failed to govern it.
The Policies Already Exist
Most organizations have email policies. You get a job, you get a company email. There is usually a naming convention. There are rules about what that email can be used for. There are policies regarding the use of personal email accounts for company business. These policies exist because organizations correctly understood that communication tools create accountability trails and that those trails need to be managed.
What would happen if an employee decided to conduct a sensitive negotiation through a personal Gmail account? Someone in that organization would have a conversation about a policy violation. The accountability structure would engage.
Now ask why that same organization treats an employee creating a personal Claude account and feeding proprietary data into it as a different category of problem. It is not a different category. It is the same problem. An employee is using an unsanctioned tool to conduct company business. The existing policy framework covers it. The fact that the tool is a language model rather than an email client is operationally irrelevant.
Software installation policies. Data classification policies. Acceptable use policies. Network access controls. Most organizations already have versions of all of these. You can block OpenAI from your internal network in an afternoon; the technical mechanism is straightforward, as the sketch below shows. The reason most organizations haven’t done it is not that the capability doesn’t exist. It is that AI has been mentally placed in a different category, one where the normal software governance reflex somehow doesn’t apply.
That mental categorization is the governance failure. Not the absence of an AI-specific framework.
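To make that concrete, here is roughly what the afternoon of work produces. This is a minimal sketch, assuming a dnsmasq-compatible resolver such as Pi-hole sits on the internal network; the domain list is illustrative, not exhaustive, and blocking endpoints is a starting point, not a governance program.

```python
# Illustrative sketch: emit DNS-sinkhole rules for common AI API endpoints.
# Assumes a dnsmasq-compatible resolver (e.g., Pi-hole) on the internal
# network. The domain list is an example, not an exhaustive blocklist.

BLOCKED_AI_DOMAINS = [
    "api.openai.com",
    "chat.openai.com",
    "api.anthropic.com",
    "claude.ai",
]

def dnsmasq_rules(domains: list[str]) -> str:
    """Emit one dnsmasq 'address' line per domain, sinkholed to 0.0.0.0."""
    return "\n".join(f"address=/{d}/0.0.0.0" for d in sorted(domains))

if __name__ == "__main__":
    # Drop the output into /etc/dnsmasq.d/ and reload the resolver.
    print(dnsmasq_rules(BLOCKED_AI_DOMAINS))
```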
Software Procurement
Take Workday. When a company decides to bring in an HR platform, a predictable sequence begins. RFP. Vendor competition. Security review. Legal. Finance. IT assessment. And somewhere in that process, someone says it. I sat in dozens of these conversations at Google. “We need this software, but we don’t know who in our organization is going to manage it.”
By the time contracts were signed, that question had an answer. Someone was promoted. Someone was handed the keys. A named human became accountable for the platform, its configuration, its data, and its outcomes.
That process exists because organizations learned that software without an owner creates problems.
Now watch how AI tools enter organizations.
An engineer finds a useful API and adds it to a project. A product manager creates an account on a new platform. A team starts using a consumer AI tool because nobody told them not to. Within weeks, proprietary data is moving through systems that no procurement process touched, no security team reviewed, and no named person owns.
The same organization that spent six months onboarding Workday allowed an AI tool with production access to its customer data to enter through a browser tab and a credit card.
The procurement discipline that protects organizations from ungoverned software was built for exactly this situation. It is simply not being applied.
The Autonomy Objection
The most common pushback to this argument is about agents. “Agents do things autonomously. The human pressed a button, but after that, the agent acted on its own. How can the human be accountable for decisions the system made independently?”
The same way a manager is accountable for decisions made by a team member they hired, trained, and authorized to operate. The same way a company is accountable for decisions made by a contractor it retained and granted access to its systems. Delegation is not an abdication of accountability. It is an extension of it.
An agent is not autonomous. I have made this argument many times.
A human defined the agent’s goal. A human selected its tools. A human authorized its deployment. A human provisioned its access. A human pressed a go button. Every decision the agent makes is downstream of a human decision that enabled it. When something goes wrong, the accountability traces back up that chain to the human decision points. That is not a philosophical argument. It is how liability works in every other domain involving delegated authority.
There are no agents on the planet that miraculously appeared out of thin air and began doing things you didn’t want them to do in your company.
We would solve most of the problems in AI governance by addressing this one mindset.
A Practical Suggestion
If the accountability question is creating genuine organizational confusion, the answer is not a new governance framework. It is a job description.
Hire a VP of Agent Management. Give them a team of agentic engineers. Make them responsible for every agent operating within the organization. What tools are deployed? What access do they have? What behavioral contracts govern them? What does the audit trail look like? Who reviews incidents?
This is not revolutionary. It is the same structure organizations built around data, around security, around cloud infrastructure. A dedicated function with named accountable humans and a defined operational scope.
It is one of the reasons Nomotic Agent Identity attaches a named accountable human to every agent.
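For illustration, the registry such a team would maintain does not need to be exotic. The sketch below shows one possible shape; the field names and the no-owner-no-deployment rule are my own illustration, not the actual Nomotic Agent Identity schema.

```python
# Illustrative sketch of a named-owner agent registry. Field names and the
# validation rule are my own illustration, not the Nomotic Agent Identity
# schema.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    purpose: str
    tools: list[str]          # what the agent is allowed to call
    access_scopes: list[str]  # what data and systems it can touch
    accountable_owner: str    # a named human, never a team alias

class AgentRegistry:
    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        # The one rule that matters: no named owner, no deployment.
        if not record.accountable_owner.strip():
            raise ValueError(
                f"{record.agent_id}: refusing to register an agent "
                "without a named accountable human"
            )
        self._records[record.agent_id] = record

registry = AgentRegistry()
registry.register(AgentRecord(
    agent_id="invoice-triage-01",          # hypothetical agent
    purpose="Route inbound invoices to the right approver",
    tools=["email.read", "erp.create_ticket"],
    access_scopes=["finance.inbox"],
    accountable_owner="jdoe@example.com",  # hypothetical owner
))
```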
Add AI tools to your software policies. Include them in your acceptable use documentation. Require the same approval process for deploying an AI agent that you require for deploying any other piece of software with production access. Review that access on the same cadence you review everything else.
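The review step is equally mundane. A minimal sketch of a cadence check follows, assuming a 90-day window; substitute whatever cadence your existing access reviews already use.

```python
# Illustrative sketch: flag agents whose access review is overdue, on the
# same cadence as any other production software. The 90-day window and the
# agent names are assumptions, not a recommendation.
from datetime import date, timedelta

REVIEW_CADENCE = timedelta(days=90)  # assumed; match your existing policy

last_reviewed = {
    "invoice-triage-01": date(2025, 1, 15),
    "support-summarizer": date(2024, 9, 2),
}

def overdue(reviews: dict[str, date], today: date) -> list[str]:
    """Return agent IDs whose last review is older than the cadence."""
    return [aid for aid, seen in reviews.items()
            if today - seen > REVIEW_CADENCE]

print(overdue(last_reviewed, date.today()))
```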
None of this requires a new governance category. It requires applying the governance categories you already have to a new class of tools.
The question of who is accountable for AI is not unanswered. It has been answered every time an organization establishes that employees are responsible for how they use the tools their organization provides, that managers are responsible for the decisions made within their teams, and that organizations are responsible for the systems they deploy.
The answer is the same now. We are just asking the question as though AI has changed something fundamental about human accountability.
It hasn’t.
Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infailible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in 2026.