The US approach to AI development, which favours large, complex models, may be less effective than China's focus on smaller, more efficient AI systems. Chinese companies such as DeepSeek are developing cost-effective, high-performance language models that rival the capabilities of larger models like GPT-4 at a fraction of the training cost.
DeepSeek's models, including DeepSeek-V3, utilise innovative architectures such as Mixture of Experts (MoE) to achieve computational efficiency. An MoE model selectively activates only a subset of its parameters for each input, reducing resource usage. DeepSeek-V3, for example, has 671 billion parameters in total but activates only 37 billion per token during inference, roughly 5.5% of the model. This approach enables DeepSeek to train models with significantly less computing power and expense. China's strategy of prioritising efficient AI models could give it a competitive advantage, particularly in emerging markets.
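The routing idea behind MoE can be illustrated with a minimal sketch. This is not DeepSeek's actual implementation; the layer sizes, expert count and top-k value below are toy values chosen for illustration. The key point is that a small router scores all experts, but only the top-k selected experts perform any computation for a given token.

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 8        # hidden size (tiny, for illustration)
N_EXPERTS = 16     # total experts in the layer
TOP_K = 2          # experts activated per token

# Each expert is reduced here to a single weight matrix for brevity.
expert_weights = rng.normal(size=(N_EXPERTS, D_MODEL, D_MODEL))
# The router produces one score per expert for a given token.
router_weights = rng.normal(size=(D_MODEL, N_EXPERTS))

def moe_forward(token: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Route one token through its top-k experts; return output and chosen expert ids."""
    logits = token @ router_weights                  # shape (N_EXPERTS,)
    top_k = np.argsort(logits)[-TOP_K:]              # indices of the k highest-scoring experts
    # Softmax over the selected logits only, so the k gate weights sum to 1.
    w = np.exp(logits[top_k] - logits[top_k].max())
    w /= w.sum()
    # Only the selected experts compute anything; the other N_EXPERTS - TOP_K stay idle.
    out = sum(wi * (token @ expert_weights[i]) for wi, i in zip(w, top_k))
    return out, top_k

token = rng.normal(size=D_MODEL)
out, chosen = moe_forward(token)
print(f"{chosen.size} of {N_EXPERTS} experts activated")
```

Scaled up, the same principle is what lets a model hold hundreds of billions of parameters while paying the per-token compute cost of a far smaller dense model.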
While the US focuses on open innovation and the pursuit of Artificial General Intelligence (AGI), China's centralised, state-driven model emphasises practical AI deployment across sectors. This pragmatic approach, combined with the development of efficient AI models, may prove a more effective strategy for widespread AI adoption and economic transformation.




