Story

On April 7, 2026, Anthropic announced Claude Mythos Preview, a frontier AI model so capable that the company chose not to release it publicly, citing security concerns. According to the announcement, the model can autonomously identify previously unknown vulnerabilities, generate working exploits, and carry out complex cyber operations with minimal human input. Testing reportedly revealed numerous related weaknesses across widely used systems, though those results require further validation.

The decision marks a critical shift in AI deployment: release constraints are now security-driven rather than commercial. The World Economic Forum, which reported the core details, notes that frontier AI is expanding both defensive capabilities and potential cyber risks, with tasks that once required specialized teams working for weeks now executable in hours.

Market reaction has been significant, with reports linking Mythos-related fears to a $2 trillion wipeout in global technology stocks. In response, US officials have begun urging major financial institutions to test advanced AI systems like Mythos in controlled environments. The episode underscores a new reality: AI capability is advancing faster than the ability to govern it safely, making security the primary gatekeeper for AI release decisions. MIT Technology Review noted additional context, and TechCrunch added corroborating details.