DeepSeek debuts Sparse Attention

30 September 2025

DeepSeek has launched an updated experimental AI model, DeepSeek-V3.2-Exp, showcasing a 'sparse attention' method as a step towards its next-generation architecture. The Chinese startup's new technique, called DeepSeek Sparse Attention (DSA), is designed to improve efficiency when processing long text sequences by letting each token attend to only a subset of the context rather than the full sequence.
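For readers unfamiliar with the idea, the sketch below shows one generic form of sparse attention in NumPy, where each query attends only to its top-k highest-scoring keys. It is an illustrative toy under that assumption, not DeepSeek's actual DSA implementation, and the function name and parameters are invented for this example.

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k=4):
    """Toy top-k sparse attention: each query attends only to its top_k
    highest-scoring keys instead of the full sequence.
    (Illustrative sketch only; not DeepSeek's actual DSA.)"""
    scores = q @ k.T / np.sqrt(q.shape[-1])          # (n_q, n_k) similarity scores
    # Threshold at each query's top_k-th largest score; mask everything below it.
    kth = np.sort(scores, axis=-1)[:, [-top_k]]      # shape (n_q, 1)
    masked = np.where(scores >= kth, scores, -np.inf)
    # Softmax over the surviving (sparse) scores.
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                # (n_q, d_v) attended values

# Toy usage: 8 queries and keys of dimension 16, each query attends to 4 keys.
rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, 8, 16))
out = topk_sparse_attention(q, k, v, top_k=4)
print(out.shape)  # (8, 16)
```

Because each query only mixes values from a few keys, the cost of the attention step grows with the number of selected keys rather than with the full sequence length, which is where the efficiency gain on long inputs comes from.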

The latest model builds on V3.1, introducing a mechanism intended to make both training and inference more efficient. DeepSeek has indicated that the newest versions of its models support the FP8 number format, with BF16 support in progress; these reduced-precision formats trade some numerical accuracy for speed and make it easier to run large models on limited hardware. DeepSeek is also cutting its API prices by 50% to attract more users.
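As a rough illustration of why reduced-precision formats matter, the hypothetical PyTorch snippet below compares the per-element storage of FP32, BF16, and (where available) FP8 weights. The tensor size is arbitrary, and the example does not reflect DeepSeek's actual training or serving setup.

```python
import torch

# A weight matrix stored in FP32, BF16, and FP8 shrinks memory roughly
# 2x and 4x respectively, at the cost of numeric precision.
weights_fp32 = torch.randn(4096, 4096)             # ~64 MB in FP32
weights_bf16 = weights_fp32.to(torch.bfloat16)     # ~32 MB, wider range than FP16
print(weights_fp32.element_size(), weights_bf16.element_size())  # 4 bytes vs 2 bytes

# FP8 (e4m3) storage exists in recent PyTorch builds; availability depends on
# the version and hardware, so guard the cast.
if hasattr(torch, "float8_e4m3fn"):
    weights_fp8 = weights_fp32.to(torch.float8_e4m3fn)  # ~16 MB
    print(weights_fp8.element_size())                    # 1 byte
```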

Hangzhou-based DeepSeek described the model as an advancement in its next-generation AI lineup. Huawei has announced that its products will support DeepSeek's latest model update, and DeepSeek is working with Chinese chipmakers to support the model on their hardware.
