The Death of the Job Description
Someone told me recently that they were looking for a candidate with 1 year of experience building and deploying AI agents in enterprises.
I asked them to define what an agent was.
The conversation became uncomfortable quickly.
This is the current state of AI-related hiring. Job descriptions written around terminology that the hiring manager cannot define, for capabilities that have existed for months rather than years, for roles that nobody has held because the function was too new.
Five years of experience in the agentic enterprise is a fiction. Agentic workflows, in any meaningful enterprise-deployment sense, are roughly two years old, at a generous count. The people writing these requirements are either unaware of the timeline, copying language from other equally confused job descriptions, or attempting to filter for something they cannot name precisely.
The Credential Trap
The job description is fundamentally a proxy for credentials. When a hiring manager cannot directly assess capability, they specify credentials that they believe correlate with it. Years of experience. Specific tool familiarity. Degrees from certain programs.
AI has broken most of the proxies simultaneously.
The tools changed so fast that experience with last year’s tools is of questionable relevance to this year’s deployment. Academic programs have lagged behind what is being deployed in production. The people who actually know how to deploy AI agents effectively largely learned on the job in the last eighteen months, often without the formal credentials the job descriptions now require.
The result: the most capable people frequently fail to match the job description, while those who do match it often hold credentials that certify fluency in AI vocabulary without the substance behind it.
The Resume Is Now a Fiction
Here is a problem that is growing faster than anyone is willing to discuss openly.
AI generates resumes. AI scans resumes. At some point in many hiring processes, an AI-generated document is evaluated by an AI-powered applicant tracking system, with a human making a decision based on what both AIs surface.
The candidate may be excellent. The resume may bear only a passing resemblance to their actual capabilities. The ATS may have filtered out genuinely strong candidates because their AI-generated resume used slightly different vocabulary than the job description’s AI-generated requirements.
Both sides of the transaction are optimizing for the artifact rather than the outcome. Resumes are being written to pass AI screening. Job descriptions are being written to sound like other job descriptions. The human capability behind both documents is increasingly obscured by the process designed to surface it.
The deeper question is whether the resume format has any remaining utility in a world where AI generates, screens, and ranks them without any reliable signal making it through to the actual hiring decision. The resume was always an imperfect proxy for capability. It has become an imperfect proxy for AI prompt quality.
What Skills Are Actually Needed
Here is the honest version of the skill set required for most AI-related roles, stripped of the credential theater.
The ability to ask precise questions. AI systems amplify the quality of the questions they are asked. Someone who can frame a problem clearly and identify what a useful answer looks like is more effective with AI tools than someone who knows the vocabulary but thinks imprecisely.
Judgment about outputs. AI systems produce plausible outputs. Accuracy is a separate question. The scarce and valuable skill is the ability to evaluate AI output critically and recognize when it is wrong. This is domain expertise applied to AI-generated content, and no certification produces it.
Comfort with rapid change. The AI landscape is changing faster than any credential framework can keep up with. The disposition to keep learning, to be wrong about tools that get superseded, and to develop judgment about new capabilities as they emerge is more durable than any specific technical skill that exists today.
None of these maps cleanly onto a job description built around years of experience with specific tools.
The Middle Management Problem
Middle management exists largely to coordinate information flow, supervise task execution, track progress, and escalate decisions that need more authority than the executing layer has. These are real functions. They are also structurally vulnerable to agent augmentation.
An agent that can monitor task completion, synthesize status from multiple workflows, flag exceptions for human attention, and route decisions to appropriate owners is performing a substantial fraction of what a middle manager spends their day on. The human judgment that remains in that role, the coaching and the contextual reading of organizational dynamics, is real and important. The question is whether it requires as many people as the current structure assumes.
Organizations designing teams around AI augmentation will probably find that a leaner management layer, with agents handling the coordination and tracking work, produces better outcomes than the current structure. The job descriptions that need rewriting most urgently are those that mix genuine human judgment work with coordination overhead that agents will absorb.
The Entry Level Equation
A college graduate with an AI co-worker can outperform a middle manager who has adapted slowly.
The entry-level hire today spends a significant fraction of their time on work that AI tools perform well: research, synthesis, formatting, first-draft generation, and data organization. When those tasks shift to AI, the entry-level hire spends more time on judgment, client interaction, and novel problem-solving than they would have two years ago.
The middle manager who built their value on being better at those same tasks is now in a different competitive position. The efficiency gap that previously justified the compensation differential has narrowed. The entry-level hire with AI augmentation is performing the functional equivalent of what the middle manager used to do, at a fraction of the cost.
Experience still develops judgment, and judgment remains irreplaceable. The argument is that the tasks justifying the experience premium are the tasks that are getting automated first.
Rewriting the Job Description
An honest AI-era job description looks different from what is currently being published.
It specifies outcomes rather than credentials. The role is responsible for X. Success looks like Y. The tools that exist today to accomplish this include Z, but those tools will change, and the expectation is that the person holding this role learns with them.
It is honest about what agents handle. A significant fraction of this role involves overseeing and directing AI-generated work. The human contribution is judgment, quality evaluation, and decisions that require contextual reasoning, which AI cannot replicate.
It avoids impossible experience requirements. If the capability has existed for eighteen months, the description avoids requiring three years of experience in it.
And the resume submitted with it probably should be replaced with something that actually demonstrates what the candidate can do with the tools available today. A portfolio. A problem solved. A demonstration of judgment applied to a real situation.
AI is very good at making things look qualified. What organizations need to hire is the capability underneath the appearance.
Chris Hood is an AI strategist and author of the #1 Amazon Best Seller Infallible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, is available now!