Project Glasswing and the Governance Conversation
Anthropic announced Project Glasswing this week: a consortium of AWS, Apple, Google, Microsoft, NVIDIA, JPMorganChase, and others, organized around a new frontier model called Claude Mythos Preview and deployed for defensive cybersecurity. The headline capability: Mythos found thousands of zero-day vulnerabilities across every major operating system and browser, including a 27-year-old flaw in OpenBSD and a 16-year-old bug in FFmpeg that automated tools had missed despite hitting the same line of code five million times.
The coverage is treating this as a moment where AI crossed some threshold of capability that changes everything. I want to push back on that framing and redirect the conversation toward what this actually tells us.
Keep in mind the marketing powerhouses behind this announcement.
This Is What AI Does
Finding a vulnerability that survived 27 years of human review is not, in isolation, a remarkable demonstration of something new. It is a demonstration of something AI has always been better at than humans.
AI systems process information faster than people. They hold more context simultaneously. They don’t get tired. They don’t move on just because something has been working fine for a decade and nobody is looking at it closely anymore. The FFmpeg vulnerability survived because it works. Code that works doesn’t get scrutinized. Human attention moves to problems, not to things that appear to be running fine.
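The mechanism behind that FFmpeg statistic is worth a short illustration. What follows is a minimal sketch in Python, with an entirely invented parser and bug (the field layout, flag semantics, and probabilities are my assumptions, not the real code): a fuzzer can execute a flawed line on every single input and still never surface the flaw, because executing a line and triggering its failure condition are different events.

```python
import random

def parse_header(data: bytes) -> int:
    """Toy parser. The field layout, flag semantics, and bug are all
    invented for illustration; this is not the actual FFmpeg code."""
    if len(data) < 3:
        return 0
    length = int.from_bytes(data[:2], "big")
    flags = data[2]
    # This line executes on every single input, so coverage reports show
    # it as saturated. It is only *wrong* when length == 0xFFFF and the
    # extension flag is set, pushing the result past the 16-bit range
    # callers assume. Random mutation hits that combination about once
    # in 130,000 tries, and even then nothing crashes, so a
    # crash-oriented fuzzer records no signal at all.
    return length + (1 if flags & 0x80 else 0)

executions = failures = 0
for _ in range(1_000_000):
    data = bytes(random.getrandbits(8) for _ in range(3))
    executions += 1
    failures += parse_header(data) > 0xFFFF

print(f"flawed line executed {executions:,} times; flaw triggered {failures} times")
```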
Mythos found what it found for the same reason we believe AI will eventually help find a cure for cancer. Not because it has transcended human capability in some mystical sense. Because it can process information faster, examine more combinations simultaneously, and apply consistent attention to areas where human focus has drifted. That is the value proposition of AI systems in any domain that involves searching large spaces for non-obvious patterns.
This isn’t intended to discount the discovery. The vulnerabilities are real. The patches matter. The work is valuable. But describing this as AI achieving something surprising misunderstands what AI is and what it’s good at.
Autonomous Is Still the Wrong Word
The Anthropic announcement uses the word “autonomously” to describe how Mythos found and developed some of these exploits. Without human steering, as they put it.
I want to be precise here, because, as always, enterprises like to use ‘autonomous’ to mean something other than what the word actually means.
Mythos was built to find vulnerabilities. It was trained on code and security research. It was pointed at specific software by the researchers who defined the mission. It executed that mission using the capabilities it was given for exactly that purpose.
That is not autonomy. That is automation. Sophisticated, genuinely capable automation, but still nothing more than automation. A human defined what the system would do. The system did it. Making decisions when it was designed to make decisions is not autonomy. Autonomy is self-governance. Mythos is not governing itself. It is executing a mission designed, authorized, and deployed by humans.
The reason this distinction matters is not semantic. It matters because how we describe what these systems are determines how we govern them. If we label Mythos autonomous because it found something its operators didn’t expect, we’ve set a standard for autonomy so low that every spell-checker qualifies. And governance frameworks built on inflated capability assumptions will address imaginary risks while missing the real ones.
What We Are Actually Learning
Here is what Project Glasswing actually demonstrates, stripped of the framing.
AI, when developed and used correctly, can expose flaws in systems. And by extension, flaws in the decisions that created those systems. A 27-year-old vulnerability in a security-hardened operating system is not just a code flaw. It is a record of how human attention degrades over time, how systems inherit trust they haven’t re-earned, and how the absence of systematic re-examination creates compounding exposure.
That lesson extends well beyond cybersecurity.
The same principle applies to governance systems. Governance infrastructure that worked when it was written, that has been running fine, that nobody is looking at closely because it hasn’t obviously broken, is carrying vulnerabilities that no human is systematically reviewing. The policies that were appropriate three years ago. The authorization structures that made sense for the systems that existed when they were designed. The audit trails that look complete but were never validated against the evidence standards they’ll be held to.
Pointing a capable AI system at governance infrastructure would yield the same kind of result Mythos found in FFmpeg. Code that works. Assumptions nobody is examining. Decisions that survived because they were never seriously challenged, not because they were correct.
That is what Nomotic is working toward. Not governance as a static framework that gets approved and filed, but governance as a system that continuously examines its own assumptions, detects drift from the standards it was designed to enforce, and produces evidence that can survive scrutiny rather than just documentation that satisfies a checklist.
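To make “detects drift” concrete, here is a minimal sketch, assuming governance configuration can be snapshotted as structured data. Everything in it, the schema, the field names, the check_drift helper, is a hypothetical illustration, not Nomotic’s implementation. The idea is only this: diff what is running against what was approved, continuously, and treat every divergence as a finding rather than waiting for an audit to surface it.

```python
def check_drift(baseline: dict, current: dict, prefix: str = "") -> list[str]:
    """Recursively diff a live governance config against the baseline it
    was approved with, returning one finding per divergence."""
    findings: list[str] = []
    for key in sorted(set(baseline) | set(current)):
        path = f"{prefix}{key}"
        if key not in current:
            findings.append(f"{path}: control removed (was {baseline[key]!r})")
        elif key not in baseline:
            findings.append(f"{path}: unreviewed addition {current[key]!r}")
        elif isinstance(baseline[key], dict) and isinstance(current[key], dict):
            findings.extend(check_drift(baseline[key], current[key], path + "."))
        elif baseline[key] != current[key]:
            findings.append(f"{path}: drifted from {baseline[key]!r} to {current[key]!r}")
    return findings

# Illustrative snapshots, not a real governance schema.
approved = {"log_retention_days": 365, "approvals": {"min_reviewers": 2}}
live = {"log_retention_days": 30, "approvals": {"min_reviewers": 1}, "auto_approve": True}

for finding in check_drift(approved, live):
    print(finding)
# approvals.min_reviewers: drifted from 2 to 1
# auto_approve: unreviewed addition True
# log_retention_days: drifted from 365 to 30
```

The FFmpeg lesson maps directly here: a check like this has to run on the configurations that appear to be working fine, because those are exactly the ones no human is re-reading.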
The Real Advancement: AI Against AI
Outside the governance frame, through a pure security lens, Project Glasswing is significant for a different reason.
The threat environment is changing in ways that make the Mythos capability genuinely important. The same AI systems that accelerate and improve vulnerability discovery on the defensive side are now available, or will soon be, on the offensive side. State actors and sophisticated criminal organizations are not waiting for ethics consortiums to decide when AI-assisted exploitation is appropriate.
What Glasswing represents is the beginning of the actual security paradigm of the next decade. Not human defenders using better tools. AI defenders operating against AI attackers. The speed, scale, and sophistication of the offensive AI threat will not be matched by human security teams, regardless of how good their tools are. The question is whether the defensive AI infrastructure can be organized, coordinated, and trusted before the offensive AI infrastructure becomes the decisive advantage.
That is a governance problem as much as a security problem. Who controls the defensive AI? Under what authority does it operate? What are the boundaries of its defensive mission, and who enforces them when those boundaries get tested? What happens when a defensive AI system finds a vulnerability in allied infrastructure? What is the chain of accountability when the defender makes an error?
These questions don’t yet have answers. The fact that a consortium of the world’s largest technology companies is organizing around this problem suggests that at least some of them understand that the absence of answers is itself a risk.
The battle of AI against AI is coming. Governance infrastructure is part of what determines who wins it.
If you find this content valuable, please share it with your network.
Follow me for daily insights.
Book me to speak at your next event.
Chris Hood is an AI strategist and author of the #1 Amazon Best Sellers Infallible and Customer Transformation, and has been recognized as one of the Top 30 Global Gurus for Customer Experience. His latest book, Unmapping Customer Journeys, will be published in 2026.