Microsoft has unveiled two new AI models, MAI-Voice-1 and MAI-1-preview, signalling a move towards greater independence in AI development. MAI-Voice-1 is a speech generation model that can produce a minute of audio in under a second on a single GPU, and it is already integrated into Copilot Daily and Podcasts. MAI-1-preview, a text-based model and Microsoft's first foundation model trained end-to-end in-house, is currently undergoing public testing on LMArena.
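Taken at face value, the throughput claim can be restated as a real-time factor: sixty seconds of audio generated in under one second implies the model runs more than 60x faster than real time. A minimal back-of-the-envelope sketch, assuming the quoted one-second figure as an upper bound rather than a measured benchmark:

```python
# Rough real-time factor implied by MAI-Voice-1's stated throughput:
# 60 seconds of audio generated in under 1 second on a single GPU.
# These figures come from Microsoft's announcement; the 1-second value
# is treated here as an upper bound, not a benchmarked number.

AUDIO_SECONDS = 60.0      # one minute of generated speech
GENERATION_SECONDS = 1.0  # quoted upper bound on generation time

# Real-time factor: seconds of audio produced per second of compute.
rtf = AUDIO_SECONDS / GENERATION_SECONDS
print(f"Real-time factor: >{rtf:.0f}x faster than real time")

# At that rate, a single GPU would need roughly 30 seconds of wall-clock
# time to narrate a 30-minute podcast episode.
episode_minutes = 30
print(f"~{episode_minutes * 60 / rtf:.0f} s to generate a {episode_minutes}-minute episode")
```

This is only illustrative arithmetic; actual latency will depend on hardware, batching, and audio quality settings that Microsoft has not detailed publicly.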
MAI-1-preview was trained using approximately 15,000 NVIDIA H100 GPUs and is designed to provide helpful responses to everyday queries. Microsoft plans to integrate MAI-1-preview into Copilot for certain text-based applications in the coming weeks. While Copilot currently relies on OpenAI's GPT technology, Microsoft aims to leverage its own AI models and infrastructure to power its products.
Microsoft's move to develop in-house AI models reflects a strategic effort to reduce reliance on third-party providers and gain greater control over its AI capabilities. The company is investing heavily in AI, with plans for further model development and infrastructure expansion, including next-generation GB200 clusters. This positions Microsoft to compete more directly with OpenAI and other AI leaders.
Related Articles
- Microsoft Tests In-House AI
- Copilot Manages Nadella's Workload
- India: AI's New Epicentre
- ChatGPT Model Selection Returns