Meta's Maverick AI Underperforms

13 April 2025

Meta's new Llama 4 Maverick model has scored below its rivals on a key chat benchmark. While Meta initially kept the specific scores under wraps, the model's standing relative to competitors has become a point of discussion in the AI community. Llama 4 Maverick is a general-purpose model with 17 billion active parameters and 400 billion total parameters, designed for strong image and text understanding and therefore well suited to chat applications.

Despite its intended capabilities, benchmarks show that Maverick lags behind other recent AI models in some areas, particularly coding. It is worth noting, however, that Maverick achieves these results with fewer active parameters than some competitors, such as Gemma 3 27B and Qwen 2.5 32B. Meta claims that Llama 4 Maverick beats GPT-4o and Gemini 2.0 Flash across a broad range of widely reported benchmarks, while achieving results comparable to the new DeepSeek v3 on reasoning and coding with less than half the active parameters. The model's performance-to-cost ratio and fast response times still make it a competitive option for writing and creative tasks.

Published on 11 April 2025