Tencent's Hunyuan-MT-7B translation model has demonstrated impressive capabilities in a recent machine translation competition, securing first place in 30 of the 31 language pairs it entered at WMT25 and outperforming other models of comparable scale.
The Hunyuan-MT model suite includes Hunyuan-MT-7B and Hunyuan-MT-Chimera, supporting translation across 33 languages. The Chimera model takes an ensemble approach: it generates several candidate translations and then integrates them into a single refined output for higher quality. Tencent trained the models with a comprehensive framework spanning pre-training through ensemble refinement, drawing on multilingual corpora covering the 33 languages and millions of parallel sentence pairs, and applying reinforcement learning guided by a model that scores translation quality.
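For readers who want to experiment with the released 7B model, the minimal sketch below shows one way it might be loaded and prompted through Hugging Face transformers. The repository name, prompt wording, and generation settings are assumptions for illustration, not Tencent's documented usage.

```python
# Hedged sketch: loading Hunyuan-MT-7B with Hugging Face transformers and
# producing a single translation. The model_id and prompt format are assumed,
# not taken from official documentation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tencent/Hunyuan-MT-7B"  # assumed Hugging Face repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Simple translation instruction (assumed prompt format).
prompt = "Translate the following text into French:\n\nMachine translation keeps improving."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate one translation. A Chimera-style ensemble would instead sample
# several candidates and fuse them in a second pass.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

In practice, the ensemble refinement described above would wrap this single-pass generation: multiple sampled translations are fed back to the model (or a dedicated Chimera checkpoint) to be merged into one improved result.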
The WMT25 shared tasks serve as a platform for researchers to showcase advances in machine translation. Tencent's success highlights the progress and potential of its open-source translation models.