AI: China's Efficient Model Edge

5 December 2025

The US approach to AI development, which favours large, complex models, may prove less effective than China's focus on smaller, more efficient systems. Chinese companies such as DeepSeek are building cost-effective, high-performance language models that rival the capabilities of far larger models like GPT-4 at a fraction of the training cost.

DeepSeek's models, including DeepSeek-V3, use a Mixture of Experts (MoE) architecture to achieve computational efficiency: only a subset of the model's parameters is activated for each input token, reducing resource usage. DeepSeek-V3, for example, has 671 billion parameters but activates only 37 billion per token during inference. This approach lets DeepSeek train models with significantly less computing power and expense. China's strategy of prioritising efficient AI models could give it a competitive advantage, particularly in emerging markets.
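The selective activation described above can be illustrated with a minimal sketch of top-k expert routing. This is not DeepSeek's actual implementation; the expert count, the top-k value, and the toy `expert` function are all hypothetical stand-ins chosen only to show how a router lets most parameters sit idle for any given token.

```python
import math
import random

random.seed(0)

NUM_EXPERTS = 8   # hypothetical small expert pool (DeepSeek-V3 uses far more)
TOP_K = 2         # experts activated per token in this sketch

def softmax(xs):
    # Numerically stable softmax over the router's raw scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def expert(idx, x):
    # Stand-in for a feed-forward expert network: a trivial per-expert transform.
    return x * (idx + 1)

def moe_forward(x, router_scores):
    # Convert router scores to weights, then keep only the k best experts.
    weights = softmax(router_scores)
    top = sorted(range(NUM_EXPERTS), key=lambda i: weights[i], reverse=True)[:TOP_K]
    # Only the selected experts compute; the rest are skipped entirely,
    # which is where the compute saving comes from.
    y = sum(weights[i] * expert(i, x) for i in top)
    return y, top

scores = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
y, active = moe_forward(1.0, scores)
print(f"active experts: {sorted(active)} of {NUM_EXPERTS}")
```

The ratio of active to total experts here (2 of 8) plays the same role as DeepSeek-V3's 37 billion active parameters out of 671 billion: per-token compute scales with the active subset, not the full model.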

While the US focuses on open innovation and the pursuit of Artificial General Intelligence (AGI), China's centralised, state-driven model emphasises practical AI deployment across various sectors. This pragmatic approach, combined with the development of efficient AI models, may prove to be a more effective strategy for widespread AI adoption and economic transformation.

Source: ft.com


Published on 4 December 2025
Tags: ai, artificial intelligence, china, deepseek, models, technology
  • China's AI Training Migration
  • China Dominates Open AI
  • China's AI Ascendancy Looms
  • Senators Aim to Block Chip Sales