Agentic AI, not consumer chatbots, will determine whether trillion-dollar investments in large language models (LLMs) ever translate into sustainable profits, according to GlobalData.
As enterprise deployments trigger a surge in application programming interface (API) calls and token consumption, energy intensity and infrastructure economics are emerging as decisive variables. According to the intelligence and productivity platform, these forces will reshape competitive dynamics and define which players capture value in the next phase of the AI investment cycle.
The vision behind the large capital expenditure (capex) budgets is that the world is moving toward an AI-native reality in which generative AI, agentic AI, and machine learning are integrated into enterprises’ future operations and processes.
GlobalData has created a financial model to understand the role that consumer and enterprise generative AI adoption can play in producing sufficient revenue and operating margin for the owners of frontier LLMs to turn a genuine profit.
The company’s latest Strategic Intelligence report, “The AI Journey – From Generative to Agentic,” makes the case that agentic AI is the only way for the AI industry to generate profitability.
While consumer adoption is important because it will generate subscription fees, the key to generative AI’s profitability lies in the usage fees enterprises pay for tokens.
Enterprises will deploy agentic AI software that, over time, will increasingly use reasoning LLMs to carry out complex, intelligent automated workflows.
In the next two to four years, enterprises will be making tens of thousands of API calls to LLMs daily, generating millions, billions, and eventually trillions of tokens per day.
It is this kind of volume level that is needed to turn a profit on trillions of dollars in capex on AI data centers.
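The scale of that volume claim can be sanity-checked with simple arithmetic. The sketch below is illustrative only: the call counts, tokens per call, and number of enterprises are assumptions chosen to match the article's "tens of thousands of API calls" and "trillions of tokens per day" framing, not figures from GlobalData's model.

```python
# Illustrative sketch of daily enterprise token volume.
# All figures below are assumptions, not GlobalData's model inputs.

def daily_tokens(api_calls_per_day: int, tokens_per_call: int) -> int:
    """Total tokens consumed per day across all API calls."""
    return api_calls_per_day * tokens_per_call

# One enterprise making 50,000 API calls per day at ~2,000 tokens per call:
per_enterprise = daily_tokens(50_000, 2_000)   # 100 million tokens/day

# Scaled across an assumed 10,000 such enterprises:
industry_wide = per_enterprise * 10_000        # 1 trillion tokens/day

print(f"{per_enterprise:,} tokens/day per enterprise")
print(f"{industry_wide:,} tokens/day industry-wide")
```

Under these assumptions, trillion-token days require only tens of thousands of heavy enterprise users, which is why usage-based token revenue, rather than consumer subscriptions, dominates the profitability question.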
William Rojas, director of tech research, Strategic Intelligence, at GlobalData, notes: “At the heart of this study’s analysis is the role that energy consumption plays in the generative AI business model.
“Energy consumption, measured in watt-hours per prompt, is directly linked to the number of computations (floating-point operations, or FLOPs) required by LLMs and to the number of tokens generated.”
For example, roughly two FLOPs are typically required per parameter in an LLM, and models such as GPT-5 and DeepSeek V1 have between 1 trillion and 2 trillion parameters.
That means, even with advanced techniques that reduce the computational burden (such as mixture-of-experts routing, which activates only a fraction of a model’s parameters per token), approximately 100 billion to 200 billion parameters will still need to be computed for each token.
As the industry moves toward reasoning models and the context window expands, the number of tokens per prompt will increase 10-fold or more. It is not an overstatement to call this a token explosion.
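Putting the figures above together gives a rough sense of the energy stakes. In the sketch below, the two-FLOPs-per-parameter rule and the 100 billion to 200 billion active parameters come from the text; the accelerator efficiency and prompt lengths are illustrative assumptions, not measured or vendor figures.

```python
# Back-of-envelope sketch of energy per prompt.
# FLOPs-per-parameter and active-parameter counts follow the article;
# GPU efficiency and token counts are illustrative assumptions.

FLOPS_PER_PARAM = 2          # FLOPs per active parameter per token (from text)
active_params = 150e9        # mid-range of the 100-200 billion estimate

flops_per_token = FLOPS_PER_PARAM * active_params   # 3e11 FLOPs per token

# Assumed delivered accelerator efficiency (illustrative, not a vendor spec):
gpu_flops_per_joule = 1e12   # 1 trillion FLOPs per joule of energy

energy_per_token_j = flops_per_token / gpu_flops_per_joule   # 0.3 J per token

# Reasoning models expand tokens per prompt ~10-fold, scaling energy linearly:
tokens_simple = 1_000        # assumed short prompt
tokens_reasoning = 10_000    # assumed reasoning prompt with long context
print(f"{energy_per_token_j * tokens_simple:.0f} J per simple prompt")
print(f"{energy_per_token_j * tokens_reasoning:.0f} J per reasoning prompt")
```

Because energy per prompt is the product of FLOPs per token and tokens per prompt, a 10-fold jump in token counts feeds directly into a 10-fold jump in energy cost, which is the linkage driving the margin pressure described below.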
Rojas adds: “In terms of winners and losers, providers of hardware and facilities for constructing and operating AI data centers are well-positioned to continue reaping the financial benefits of the capex boom.
“That said, the LLM model owners are not sitting on a profit-making machine; they are currently losing money due to rising token-processing costs.”
The generative AI business model is unusual in that energy consumption is a critical factor in determining net margins: the number of tokens processed per prompt is growing rapidly and is not expected to stop growing any time soon.
Rojas concludes: “The semiconductor industry is working overtime to enhance the cost performance of graphics processing units (GPUs), high-bandwidth memory, and data center server-to-server networking, but it does feel a little like Sisyphus, the figure from Greek mythology who would push a rock to the top of a hill only to have it fall back down.
“As generative AI continues to grow in capability and complexity, the quest for ever-greater cost performance will not end anytime soon.”