Anthropic has limited access to Claude Mythos Preview to Project Glasswing, a defensive cybersecurity initiative for selected partners. The company says it created the program because the new frontier model showed vulnerability-discovery and exploit-development capabilities that could reshape software security.
The official framing is defense first. Anthropic says Project Glasswing participants will use Mythos Preview to find and fix vulnerabilities in critical software, with the company committing usage credits to the research preview. Its red-team write-up describes internal cases where the model could work autonomously on serious vulnerability discovery tasks.
Why it matters
This is a different release pattern from a normal chatbot launch. Anthropic is not merely gating a model for capacity or commercial reasons; it is treating cyber capability as a deployment risk that needs partner testing, access controls, and operational lessons before broader availability.
These remain Anthropic's own claims rather than independently verified results. The key takeaway for buyers is narrower: frontier models are starting to affect vulnerability research, and AI vendors may increasingly release powerful capabilities through controlled previews rather than general availability.
Tool impact
For Claude, Mythos strengthens Anthropic’s reputation for frontier capability and safety caution at the same time. Security teams should watch the program for defensive tooling lessons, but ordinary Claude users should not assume these capabilities are available in standard Claude products.