What happened
Anthropic developed Mythos, a leading cybersecurity model, and restricted its vulnerability-patching capabilities to a select group of US-based companies. OpenAI followed with its Daybreak initiative, which similarly limits access to the comparable GPT-5.5-Cyber model. Together, these actions signal a shift away from broad AI model availability toward controlled releases, driven by security concerns and anticipated US government intervention.
Why it matters
Access to frontier AI models will increasingly be restricted along geopolitical lines, limiting availability for non-US developers and those outside privileged circles. The shift, exemplified by Anthropic's Mythos and OpenAI's Daybreak, stems from security concerns over misuse and model distillation, as well as compute scarcity. Procurement teams and security architects must now factor geopolitical access restrictions into their evaluations of advanced AI capabilities, rather than weighing purely technical or economic criteria. The move follows Anthropic's earlier blocking of military AI use, indicating a pattern of increasingly controlled access to powerful models.
