What happened
Mistral AI introduced the Devstral 2 series of coding-focused models: Devstral 2, a 123B-parameter dense transformer with a 256K context window, and Devstral Small 2 at 24B parameters. Devstral 2 scores 46.8% on SWE-Bench Verified, exceeding prior open-source models by more than 6 percentage points. Both models are released under the Apache 2.0 license, and the 24B Devstral Small 2 is compact enough to run locally on hardware such as a single RTX 4090 or a Mac with 32GB of RAM. The devstral-small-2505 model is also available through Mistral's API, priced at $0.1 per million input tokens and $0.3 per million output tokens. Mistral partnered with Kilo Code and Cline for the release.
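For teams piloting the hosted option, the sketch below shows what a call to the devstral-small-2505 model might look like against Mistral's chat completions endpoint. The endpoint path and response shape follow Mistral's standard OpenAI-compatible API; the MISTRAL_API_KEY environment variable, prompt, and generation parameters are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch: querying devstral-small-2505 via Mistral's chat completions API.
# Assumes a valid key in the MISTRAL_API_KEY environment variable; the prompt and
# temperature below are placeholders for illustration only.
import os
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"


def ask_devstral(prompt: str) -> str:
    response = requests.post(
        API_URL,
        headers={
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
            "Content-Type": "application/json",
        },
        json={
            "model": "devstral-small-2505",
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        },
        timeout=60,
    )
    response.raise_for_status()
    # Standard chat-completions shape: first choice's message content.
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_devstral("Write a Python function that reverses a linked list."))
```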
Why it matters
High-performance coding models that can be deployed locally under an Apache 2.0 license change the operating picture for IT security and platform teams. Developers can now run capable coding models entirely outside managed infrastructure, which increases exposure to unmonitored or unmanaged deployments in development environments and raises the due diligence needed to track software supply chain dependencies and enforce internal AI usage policies. Because the models are open source and executed locally, model provenance and runtime behaviour fall into a visibility gap outside centralised control.
Related Articles

Mistral AI's Model Trio
AI: China's Efficient Model Edge
Amazon Offers Free Kiro Access
Cursor AI tackles complex coding
