Anthropic just built an AI so powerful they won’t let anyone use it. And I have thoughts.
What Is Claude Mythos?
On April 7, 2026, Anthropic announced Claude Mythos Preview — and then immediately said you can’t have it. This isn’t just another AI model launch. This is something fundamentally different.
Mythos is Anthropic’s most capable model ever. During testing, it discovered thousands of zero-day vulnerabilities across every major operating system and browser — including bugs that had gone undetected for decades. We’re talking about security flaws that armies of human researchers missed, found in minutes by an AI.
The model comes with a 244-page system card detailing its capabilities and risks. Two hundred and forty-four pages. That’s not a product announcement — that’s a warning label.
Project Glasswing: The 11-Company Firewall
Instead of releasing Mythos to the public, Anthropic created Project Glasswing — a cybersecurity initiative that gives access to exactly 11 partner organizations: Microsoft, Amazon, Apple, CrowdStrike, and a handful of others. That’s it. Not you, not me, not the security researcher who could actually use this tool to protect people.
Anthropic’s reasoning is that Mythos is too dangerous for public release. It can find vulnerabilities faster than they can be patched. In the wrong hands, it wouldn’t just find bugs — it could exploit them. The potential for misuse is enormous.
And I get it. I really do. An AI that can find zero-days in minutes is a double-edged sword — the same capability that helps defenders also helps attackers. Anthropic is being cautious, and in a world where AI companies race to ship products regardless of consequences, that caution feels almost refreshing.
But Here’s What Bugs Me
While Anthropic is locking Mythos behind a corporate firewall, the rest of the field responded the same day: Zhipu AI released competing open models, and other labs are surely working on similar capabilities. The cat is out of the bag — or it will be soon.
So what Anthropic is really doing isn’t preventing dangerous AI from existing. They’re preventing themselves from being the ones who release it. That’s not the same thing as keeping people safe. It’s liability management dressed up as altruism.
And there’s something deeply uncomfortable about a world where the most powerful AI tools are only available to a handful of megacorporations. Microsoft, Amazon, Apple — these are the same companies that have their own track records of surveillance, market dominance, and user exploitation. Why should they get access to tools that independent security researchers can’t use?
The Claude Code Leak: Irony on Top of Irony
As if the situation weren’t complicated enough, Anthropic accidentally leaked 512,000 lines of Claude Code source code via npm. The leak revealed 44 hidden features. This is the same company that says Mythos is too dangerous to release — and they can’t even keep their own code from leaking on a public package manager.
If Anthropic can’t secure their own source code, how confident should we be that they can secure an AI model that finds zero-days for a living? The irony is thick enough to cut with a knife.
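For context on how that kind of leak typically happens: npm publishes every file in a package directory that isn’t explicitly excluded, so an over-broad publish can ship internal source and build artifacts alongside the intended bundle. I don’t know the exact cause of the Claude Code leak; this is just the general failure mode. The standard safeguard is an explicit `files` allowlist in package.json (the package name below is a made-up example):

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/",
    "README.md"
  ]
}
```

With an allowlist in place, only the listed paths (plus a few mandatory files like package.json) are published, and `npm pack --dry-run` lets you preview exactly what would be uploaded before you hit publish. It’s a thirty-second check.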
What Mythos Can Actually Do
Let me be clear about why this matters. Mythos isn’t just “a bit better” than existing models. According to Anthropic’s own system card and reporting from Reuters, TechCrunch, and CNBC:
- It found thousands of zero-day vulnerabilities across major operating systems and browsers
- Some of these bugs had gone undetected for decades
- It can identify security flaws in minutes that would take human researchers months or years
- It’s designed for defensive cybersecurity — finding vulnerabilities before attackers do
- But the same capability could be flipped for offensive use — finding vulnerabilities to exploit
This is the dilemma. The defensive potential is enormous — imagine every piece of software being audited by an AI that finds every bug. But the offensive potential is terrifying — imagine the same tool in the hands of a nation-state or a ransomware gang.
The Bigger Question: Who Gets Access to Power?
What really gets me about Mythos isn’t the technology itself. It’s the question of access. Anthropic has decided that only 11 hand-picked companies deserve to use the most powerful security tool ever created. Everyone else — independent researchers, small security firms, open-source projects, the people who actually maintain the software that Mythos finds bugs in — they don’t get access.
This creates a two-tier system: megacorporations get the best security tools, and everyone else is left vulnerable. That’s not safety. That’s privilege.
And let’s be honest about the business angle too. Anthropic is positioning Mythos as a premium enterprise tool. Project Glasswing isn’t charity — it’s a sales channel. The “too dangerous to release” framing doubles as excellent marketing for a product that costs millions to develop.
What I Think Should Happen
I don’t think Mythos should be released to everyone tomorrow. That would be reckless. But I don’t think locking it behind 11 corporate doors is the right answer either.
What I’d like to see:
- Graduated access — Start with trusted partners, then expand to verified security researchers, then to open-source maintainers who need it most
- Responsible disclosure pipeline — When Mythos finds a vulnerability, it should be reported to the maintainers, not just hoarded by partners
- Transparency — Anthropic should publish regular reports on what Mythos finds and how those findings are being used
- Independent oversight — Not just Anthropic deciding who gets access, but an independent body with public interest representation
The current model — “trust us, we’re the good guys” — isn’t good enough. Not when the stakes are this high, and not when your own source code is leaking on npm.
The Bottom Line
Claude Mythos is the most interesting AI story of 2026, and not because of what the model can do. It’s interesting because of what it represents — the moment when AI capability outpaced our frameworks for managing it.
An AI that can find every vulnerability in every piece of software is either the best thing to ever happen to cybersecurity or the worst thing to ever happen to digital safety, depending entirely on who gets to use it. And right now, Anthropic has decided that answer is “11 companies we like.”
I don’t trust that answer. And I don’t think you should either.
The real test isn’t whether Mythos works. It’s whether we can build systems that distribute its power fairly, rather than hoarding it behind corporate firewalls. Until then, “too dangerous to release” will just mean “too profitable to share.”