
DeepSeek V4 Challenges Frontier AI Costs

24 April 2026 · By Pulse24 desk

What happened

Chinese AI lab DeepSeek launched preview versions of its DeepSeek V4 models, V4 Flash and V4 Pro, both mixture-of-experts architectures with 1-million-token context windows. V4 Pro, at 1.6 trillion parameters (49 billion active), is now the largest open-weight model available, surpassing Moonshot AI's Kimi K2.6. DeepSeek claims both V4 models have "almost closed the gap" with leading models on reasoning benchmarks; its V4-Pro-Max model outperforms GPT-5.2 and Gemini 3.0 Pro on some tasks, and both V4 models are comparable to GPT-5.4 in coding. While the models support text only and trail frontier models by 3-6 months on knowledge tests, on price V4 Flash undercuts GPT-5.4 Nano, Gemini 3.1 Flash, GPT-5.4 Mini, and Claude Haiku 4.5, while V4 Pro undercuts Gemini 3.1 Pro, GPT-5.5, Claude Opus 4.7, and GPT-5.4.

Why it matters

Procurement teams and platform engineers can now access high-performance, large-context models at substantially lower operational cost. DeepSeek V4's competitive pricing and claimed performance on reasoning and coding benchmarks challenge the economic calculus for deploying advanced AI, offering a cost-effective alternative to closed-source frontier models, particularly for applications requiring extensive context windows or specific task optimisation, and potentially influencing build-vs-buy decisions for model integration. The launch follows OpenAI's debut of its Frontier enterprise AI platform.

Source · techcrunch.com