Beyond LLMs: AI Evolution


21 November 2025

What happened

The AI landscape is diversifying beyond Large Language Models (LLMs), which face well-known limitations: weak reasoning, no real-time knowledge updates, limited contextual understanding, bias, high computational costs, and factual inaccuracies. Emerging alternatives include Liquid Learning Networks (LLNs), which adapt continuously in real time; Small Language Models (SLMs), which require less compute and hallucinate less; and logical reasoning systems, which process data according to explicit rules. Open models (Google's Gemma, Meta's Llama, OpenAI's gpt-oss) offer customisation and cost-effectiveness, while AI physics models accelerate design simulations. These models complement rather than replace LLMs, each addressing specific weaknesses.
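To make the contrast with statistical prediction concrete, a logical reasoning system derives conclusions deterministically from explicit if-then rules rather than from learned probabilities. The sketch below is a minimal forward-chaining reasoner; the rules and facts are hypothetical illustrations, not drawn from any product named in the article.

```python
# Minimal sketch of a rule-based (explicit-logic) reasoner.
# All rules and facts are illustrative assumptions.

def forward_chain(facts, rules):
    """Derive new facts by applying if-then rules until no more can fire."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires only when every premise is already established.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # conclusion follows deterministically
                changed = True
    return facts

# Hypothetical compliance rules: (set of premises, conclusion).
rules = [
    ({"transaction_over_10k", "cross_border"}, "requires_review"),
    ({"requires_review", "sanctioned_region"}, "blocked"),
]

derived = forward_chain({"transaction_over_10k", "cross_border"}, rules)
print(sorted(derived))
```

Unlike an LLM, every conclusion here is traceable to the exact rules that produced it, which is why such systems appeal where auditability matters.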

Why it matters

The diversification of AI model types increases the complexity of designing and deploying AI solutions, raising due-diligence demands on IT architecture, procurement, and data governance teams. These teams must now evaluate a broader range of specialised models beyond LLMs, weighing the distinct strengths, limitations, and integration challenges of LLNs, SLMs, logical reasoning systems, and open-source options to ensure the best fit and to mitigate model-specific risks.

Source: ft.com

AI-generated content may differ from the original.


