On March 27, 2026, Anthropic did something unprecedented in the history of frontier AI: it withheld its most powerful model from public release. Claude Mythos Preview, internally codenamed Capybara, was deemed too capable in offensive cybersecurity to ship without restrictions. No major AI lab had taken that step before, and it changes the calculus for the entire security industry.
The Bombshell Announcement
Every major AI lab has released increasingly powerful models on a predictable cadence. GPT-5, Gemini Ultra 2, Claude Opus — each one shipped within weeks of completion. Mythos Preview broke that pattern.
Anthropic's internal red team found that Mythos Preview could independently discover zero-day vulnerabilities in production software, chain multiple exploits into working attack sequences, and generate novel attack techniques that had never been documented. Not with heavy prompting or jailbreaks — through straightforward security research prompts.
The decision to withhold wasn't a marketing stunt. It was a calculated response to capability evaluations that exceeded Anthropic's own ASL-3 safety thresholds for autonomous cyber operations. Mythos Preview is available only through the Frontier Safety Program — vetted security researchers, defense contractors, and select partners.
The Raw Power of Mythos in Cybersecurity
What makes Mythos Preview qualitatively different from previous models isn't just the benchmark scores; it's the nature of the capabilities the evaluations revealed.
The benchmarks tell part of the story: Mythos Preview scored 91% on CyberSecEval 3 (up from 67% for Claude Opus), solved 73% of previously unseen CTF challenges autonomously, and generated working exploits for 4 of 5 planted vulnerabilities in a controlled codebase, all within single-session interactions.
Project Glasswing — The Defensive Lockdown
Anthropic didn't just restrict Mythos Preview and walk away. They launched Project Glasswing — a massive defensive initiative designed to ensure that AI capabilities tilt toward defenders, not attackers.
The Glasswing partner roster reads like a cybersecurity hall of fame.
The message is clear: Anthropic is putting serious money behind the idea that frontier AI should make defenders stronger, not just attackers more dangerous. The $100M in API credits alone means that security startups can build on Mythos-class capabilities without the capital requirements that would normally be prohibitive.
The Bitter Lesson of Simplicity
As Nate B Jones pointed out in his analysis, this moment crystallizes the “bitter lesson” Rich Sutton articulated years ago: general methods that leverage computation ultimately beat specialized approaches. For cybersecurity, that means the era of hand-crafted detection rules and signature-based systems is ending.
The models that will define the next decade of cybersecurity won't be the ones with the most hand-tuned features. They will be the ones with the most compute, the best training data, and the most capable reasoning. The “bitter lesson” applies to security just as it applies to chess, Go, and protein folding.
Why Traditional Security Tools Are Becoming Obsolete
Consider the current state of enterprise security: dozens of point solutions, each generating alerts that a human analyst must triage. The average SOC receives over 11,000 alerts per day. The average dwell time for an attacker inside a compromised network is still measured in weeks.
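To make that alert volume concrete, here is a back-of-envelope calculation. The 2-minutes-per-alert and 8-hour-shift figures are illustrative assumptions, not numbers from any particular SOC:

```python
# Rough analyst-time cost of 11,000 alerts/day, assuming ~2 minutes of
# manual triage per alert and 8-hour shifts (both assumed for illustration).
ALERTS_PER_DAY = 11_000
MINUTES_PER_ALERT = 2
SHIFT_HOURS = 8

triage_hours = ALERTS_PER_DAY * MINUTES_PER_ALERT / 60   # analyst-hours/day
analysts_needed = triage_hours / SHIFT_HOURS             # full-time analysts

print(f"{triage_hours:.0f} analyst-hours/day")     # ~367
print(f"{analysts_needed:.0f} full-time analysts") # ~46
```

Even under generous assumptions, keeping up by hand requires a triage team far larger than most organizations can staff, which is why so many alerts simply go uninvestigated.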
When a frontier model can autonomously discover zero-days and chain exploits, the attack surface doesn't just grow — it transforms. Static signatures can't detect novel attack patterns. Rule-based WAFs can't stop AI-generated payloads that are unique every time. SIEM correlation rules written by humans can't keep pace with an adversary that iterates at machine speed.
| Traditional stack | AI-native stack |
| --- | --- |
| Static signature matching | Behavioral anomaly detection |
| Manual alert triage (11K/day) | Autonomous triage & response |
| Weekly threat intel updates | Real-time threat synthesis |
| Rule-based correlation | Reasoning-based correlation |
| Reactive patching cycles | Proactive vulnerability discovery |
The Strategic Moment — Why We Must Build Now
The window between “frontier AI can break things” and “frontier AI defends everything” is where the opportunity lives. That window is open right now, and it won't stay open forever.
Timing matters because the pieces already exist: the funding, the compute credits, and the partner access are available today, while the defensive tooling built on them is not yet.
The Moment to Build Is Now
We are at an inflection point that happens maybe once a decade in technology. The equivalent of the cloud transition in 2010, the mobile explosion in 2008, the internet itself in 1995. Frontier AI has made cybersecurity simultaneously more dangerous and more defensible — and the builders who move first will define the next generation of security infrastructure.
The question isn't whether AI-native security tools will replace the current stack. The question is who builds them, and how fast.
If you're building at the intersection of AI and cybersecurity — or want to start — the resources, the funding, and the compute are all available right now. The only thing missing is builders.