
Anthropic's Model Is Too Dangerous to Ship. They Shipped It Anyway — to 12 Companies.

May 10, 2026

Claude Mythos found thousands of zero-days across every major OS and browser before any public release. Project Glasswing is what responsible AI deployment looks like when the capability is genuinely scary.

The Model That Can Break Everything Is Now Patching It

Anthropic just handed twelve of the world's largest technology companies access to a model they haven't publicly released — because that model is too good at breaking things to hand to everyone at once. Claude Mythos Preview autonomously identified and exploited a 17-year-old remote code execution vulnerability in FreeBSD before lunch. It found thousands of zero-days across every major operating system and every major browser. The fix strategy? Don't release it openly. Give it to AWS, Apple, Cisco, Google, JPMorgan, Microsoft, NVIDIA, and a handful of others, call it Project Glasswing, and let the defenders get a head start.

At Kuaray, we've been watching the AI security capability curve for two years. This is the week it became someone else's problem to manage, and your problem to understand.

What Glasswing Actually Is (and Isn't)

The framing matters. Project Glasswing is not a bug bounty program. It's not a red team exercise. It is Anthropic deliberately gating a frontier model behind a closed security consortium because they've assessed the offensive capability as high enough to require it.

The facts on the table:

| What happened | Why it matters |
| --- | --- |
| Claude Mythos Preview found CVE-2026-4747 — a 17-year-old RCE in FreeBSD | The vulnerability existed for 17 years. A model found it in an autonomous session. |
| Thousands of zero-days found across all major OSes and browsers | Not one. Not dozens. Thousands — before any public release. |
| Launch partners include AWS, Apple, Cisco, Google, JPMorgan, NVIDIA, Microsoft | Every hyperscaler and half the Fortune 10 is in the consortium. |
| Mythos Preview remains unreleased to the general public | This is the most capable model Anthropic has built, and they're intentionally not shipping it. |

Read that last row carefully. Anthropic is sitting on a model they believe is commercially valuable, that their competitors are racing toward — and they chose not to ship it because the security surface is too large. That's either the most responsible thing a frontier AI lab has ever done, or the most expensive PR move in tech history. Probably both.

The Part Nobody Is Writing About: You're Not In the Consortium

Here is the uncomfortable reality for most CTOs reading this: your organization is not AWS, not Google, not JPMorgan. You are not getting early access to Mythos Preview. You are not in the room where the defenders are being armed.

What that means in practice: the same offensive capability that Glasswing partners are now using to harden their infrastructure will eventually be available to general users — including adversaries. The timeline between "responsible disclosure to partners" and "widely available model" is measured in quarters, not years. Your patch window is closing whether you're watching it or not.

Three things this should move up your priority stack:

1. Your CVE backlog is about to become a liability, not a nuisance. If a Mythos-class model can discover thousands of zero-days autonomously, so can its descendants — running in the hands of a threat actor who spent $20 on API credits. Every known vulnerability you haven't patched is now a liability on a timer. Not because AI is new, but because the cost of finding and exploiting it just collapsed to near zero.

2. Automated security scanning is no longer optional infrastructure. Your CI/CD pipeline should already be running static analysis (SAST) and dependency vulnerability checks at every merge. If it isn't — if you're still running quarterly pen tests and calling it a security program — this is the week to have that budget conversation. A model that can autonomously exploit RCE vulnerabilities doesn't care how thorough your Q2 audit was. (A minimal merge-gate sketch follows this list.)

3. Your threat model needs an LLM-assisted attacker baked in. The classic threat model assumes a human adversary with finite time and skill. That assumption is wrong now. Rewrite the model to assume an attacker with Claude Sonnet 4.6-level capability (the public one) on a tight loop against your attack surface. If that thought is uncomfortable, it should be — because that's the threat you're already defending against, whether your security team knows it or not.
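If you need somewhere concrete to start on point 2, a merge gate can be this small. What follows is a minimal sketch, assuming a Python codebase: bandit, pip-audit, and the src/ path are stand-ins for whatever SAST and dependency tooling fits your stack. Both tools exit nonzero when they find problems, which is all the gate relies on.

```python
#!/usr/bin/env python3
"""Minimal pre-merge security gate. A sketch, not a program of record:
bandit, pip-audit, and the src/ path are assumptions standing in for
whatever SAST and dependency scanners fit your stack."""
import subprocess
import sys

# Each gate is a command that exits nonzero when it finds problems.
GATES = [
    ("SAST (bandit)", ["bandit", "-r", "src/", "-ll", "-q"]),  # -ll: medium severity and up
    ("dependency audit (pip-audit)", ["pip-audit", "--strict"]),
]

def main() -> int:
    failed = []
    for name, cmd in GATES:
        print(f"running {name}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            failed.append(name)
    if failed:
        print(f"merge blocked; gates failed: {', '.join(failed)}", file=sys.stderr)
        return 1
    print("all security gates passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The point is not these particular scanners. The point is that the gate runs on every merge, unattended, instead of waiting for a quarterly audit.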

What Engineering Leaders Can Do This Week

You don't have to be in the Glasswing consortium to start acting like the threat environment it was designed for.

  • Run an autonomous LLM against your own attack surface — before someone else does. Tools like Nuclei, OSS-Fuzz, and emerging LLM-augmented scanners can approximate what Mythos-class models will do at scale. Run them against a test environment you own. Be horrified before an adversary is delighted. (A sketch of that loop follows this list.)
  • Prioritize CVEs by exploitability in an AI-assisted attack scenario. CVSS scores were calibrated for human attackers. An AI that can chain exploits across multi-step attack graphs changes the severity ranking. Work with your security team to re-triage your open CVEs through this lens. (A toy re-scoring sketch also follows this list.)
  • Ask your vendors which Glasswing partners they depend on. If your SaaS stack runs on AWS, Google Cloud, or Azure — congratulations, your underlying infrastructure is getting Mythos-hardened right now. If your stack runs on a niche provider not in the consortium, that's a supply chain risk worth naming explicitly.
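To make the first bullet concrete, here is a deliberately crude sketch of the tight loop from point 3 above, assuming the anthropic Python SDK and the requests library. The staging hostname and model ID are placeholders, and the whole thing should only ever point at test infrastructure you own and are authorized to probe. Real LLM-augmented scanners add tool use, state, and exploit chaining; even five turns of this loop is instructive.

```python
#!/usr/bin/env python3
"""Tight-loop recon sketch against a staging target YOU OWN.
Placeholders throughout: the hostname, the model ID, and the prompt
are assumptions, not a product. Only probe infrastructure you are
authorized to test."""
import anthropic   # pip install anthropic
import requests

TARGET = "https://staging.example.internal"  # placeholder: your own test environment
MODEL = "claude-sonnet-4-5"                  # placeholder model ID
MAX_TURNS = 5

client = anthropic.Anthropic()               # reads ANTHROPIC_API_KEY from the environment
observations = []
path = "/"

for turn in range(MAX_TURNS):
    resp = requests.get(TARGET + path, timeout=10)
    observations.append(f"GET {path} -> {resp.status_code}\n{resp.text[:1500]}")

    # Ask the model for the single next probe, given everything seen so far.
    msg = client.messages.create(
        model=MODEL,
        max_tokens=100,
        messages=[{
            "role": "user",
            "content": (
                "You are auditing a staging server we own and are authorized to test. "
                "Given these observations, reply with ONLY the most promising URL path "
                "to request next:\n\n" + "\n---\n".join(observations)
            ),
        }],
    )
    path = msg.content[0].text.strip().split()[0]
    if not path.startswith("/"):
        path = "/" + path
    print(f"turn {turn}: model suggests {path}")
```

If thirty lines of glue code gets this far, assume an adversary's version is better.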
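And for the re-triage bullet: the weights below are illustrative assumptions, not a standard. The structural point is that traits an autonomous attacker exploits cheaply (network-reachable, unauthenticated, chainable, public PoC) should multiply the CVSS base rather than sit beside it in a spreadsheet.

```python
#!/usr/bin/env python3
"""Re-rank open CVEs for an AI-assisted attacker. Every field and
weight here is an illustrative assumption; tune them with your
security team."""
from dataclasses import dataclass

@dataclass
class OpenCVE:
    cve_id: str
    cvss: float            # CVSS base score, 0-10
    network_reachable: bool
    requires_auth: bool
    public_poc: bool       # proof-of-concept exploit published
    chainable: bool        # plausibly a link in a multi-step attack graph

def ai_assisted_priority(v: OpenCVE) -> float:
    """Hypothetical heuristic: CVSS is the floor, cheap-to-automate traits multiply it."""
    score = v.cvss
    if v.network_reachable:
        score *= 1.5       # reachable surface: an LLM loop can probe it directly
    if not v.requires_auth:
        score *= 1.3
    if v.public_poc:
        score *= 1.4       # discovery cost already near zero
    if v.chainable:
        score *= 1.3       # medium CVEs become rungs, not dead ends
    return score

backlog = [
    OpenCVE("CVE-AAAA-1111", 9.1, False, True, False, False),  # scary CVSS, hard to reach
    OpenCVE("CVE-BBBB-2222", 5.4, True, False, True, True),    # "medium", trivially automatable
]
for v in sorted(backlog, key=ai_assisted_priority, reverse=True):
    print(f"{v.cve_id}: cvss={v.cvss:.1f} ai_priority={ai_assisted_priority(v):.1f}")
```

In this toy backlog the "medium" 5.4 outranks the unreachable 9.1, which is exactly the inversion an AI-assisted attacker produces.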

The model that can break everything is currently being used to fix it. The window before that model — or something like it — is used to break your things is finite.

Contact us for a strategic consultation — we help engineering leaders model the AI-assisted threat landscape and build security programs that don't assume last year's adversary.