The buzz in the generative AI (GenAI) scene is around reasoning – but reasoning models are not without limitations. They require more processing power and consume more tokens than standard models, making them more expensive.

The extra computation can also increase latency. And, as with all large language models (LLMs), they face familiar challenges: they can still hallucinate, and their results are only as good as the data used to train them, says GlobalData.

OpenAI, Google, DeepSeek, IBM and Anthropic have all released reasoning models. More recently, Europe’s Mistral brought out its first reasoning model, Magistral.

Rena Bhattacharyya, chief analyst and practice lead for Enterprise Technology and Services at GlobalData, comments: “Reasoning by LLMs is supposed to emulate the way in which humans think, which includes breaking complex tasks down into more manageable, smaller steps, which yields better results. The model learns as it goes, honing its chain of thought, correcting mistakes, and trying different approaches, leading to improved performance over time.”

GlobalData notes that, in addition to greater accuracy, a significant benefit of reasoning is greater transparency, giving users insight into how LLMs craft their results. Users are often reluctant to act on AI-driven findings because they do not understand how models arrive at them, and establishing trust in model outcomes is one of the biggest challenges to scaling AI deployments.

Bhattacharyya notes: “Even in the days of traditional AI, model explainability was a top concern. The hope is that by providing a view into the intermediate steps taken by a model to arrive at its final output, end users will have greater confidence in its results.”

However, not all workloads require a reasoning model. Many are well served by traditional models that are faster and require less computational power.

Bhattacharyya concludes: “Enterprises will need to balance speed, cost, and accuracy while developing their AI strategies and roadmaps. They should plan to use a mixture of models depending on workload requirements. As Europe looks to develop regional AI capabilities, it is paramount that its flagship provider of LLMs is developing models that reflect the latest technology trends.”