Investments in AI are at an all-time high with the market set to double to $300-billion by 2027, but chief financial officers (CFOs) adopting the technology are unlikely to realize the anticipated enterprise benefits unless they mitigate four common stalls that hinder AI adoption, according to Gartner.

“Gartner has been working with 80 000 executives around the globe to figure out the right use cases, to improve data, and to get teams ready for the new era of AI,” says Clement Christensen, senior director analyst: research in the Gartner Finance practice. “However, as enterprises continue to pursue AI, we see cracks emerging: four enterprise-level organisational challenges in particular that we call the ‘AI Stalls’.”

AI Stalls are common problems with the ways organisations use AI, rather than problems with the technology itself, and they can cause significant delays in AI adoption and return on investment. The four AI stalls are: cost overruns, misuse in decision-making, loss of trust, and rigid mindset.

“These stalls will be pervasive across most organisations of all sizes and industries from now through 2030. The time to course correct these stalls is now, and CFOs have a vital role in the enterprise in identifying and counteracting these stalls before they become a reality,” says Nisha Bhandare, vice-president analyst: research, in the Gartner Finance practice.

Cost Overruns

“There’s a uniqueness to AI costs. Given how new AI is, CFOs don’t really know how much it costs: they are learning as they go, driving cost estimates off by 500-1000%,” says Bhandare. “Initial rollout costs, such as infrastructure, user licences, hiring new talent and implementation costs, are something CFOs are aware of and are no different from those of other technologies.”

However, Bhandare explains that there are two buckets of costs that are new with AI initiatives, and that CFOs must uncover these with each new investment.

First, there is the ongoing cost of maintaining the AI models: keeping them running, ensuring they remain compliant, and cleansing the data they depend on. There are also surprise costs, such as the environmental cost of running large language models.

GenAI adds a cost model of its own: usage priced per query, per employee. This is where most of the volatility in cost projections arises, especially as organisations mature from basic to more advanced AI use cases.

The second bucket of costs new with AI initiatives is the “cost of experimentation”, or sunk cost. Unlike other technologies, AI follows an experimental process: start small and keep training the model. Some experiments will fail, whether through low adoption or the wrong choice of use case.
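As a rough illustration of how per-query pricing drives volatility, the usage bucket can be projected with simple arithmetic. All figures below are hypothetical placeholders, not Gartner estimates: the point is that cost scales with three inputs at once, so modest shifts in each compound quickly as use cases mature.

```python
# Hypothetical sketch of a monthly GenAI usage-cost projection.
# All numbers are illustrative assumptions, not published pricing.

def monthly_usage_cost(employees, queries_per_employee_per_day,
                       cost_per_query, workdays=21):
    """Project monthly GenAI usage cost from per-query pricing."""
    return (employees * queries_per_employee_per_day
            * cost_per_query * workdays)

# Basic use case: 500 employees, 10 queries a day, $0.01 per query.
basic = monthly_usage_cost(500, 10, 0.01)

# Advanced use case: heavier usage and a pricier model per call.
advanced = monthly_usage_cost(500, 40, 0.05)

print(f"basic: ${basic:,.0f}/month, advanced: ${advanced:,.0f}/month")
```

Because usage volume and per-query price rise together as adoption matures, the advanced scenario here costs roughly twenty times the basic one, which is the kind of swing behind the 500-1000% estimation errors described above.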

Misuse in Decision-Making

“Most of the CFO’s enterprise colleagues – such as business decision-makers in marketing, sales, and supply chain – are excited about the benefits of automation, and they will likely overestimate AI’s intelligence,” says Bhandare. “They’ll want to go to an automation solution right away, instead of a trial period using more of a decision support or augmentation approach.”

Good CFOs will need to pace their organisation’s adoption of AI to avoid the disillusionment that can arise from inflated expectations. There is a natural maturity progression from decision support, to augmentation, to automation with nearly any AI use case.

Automating decisions too fast is likely to lead to bad results. It’s also important to establish a process to periodically review the performance of automated decisions because AI systems need to be retrained and tinkered with on a regular basis.

Loss of External Trust

As a significant point of contact for investors, regulators and customers, CFOs have an important part to play in managing how an organisation’s use of AI is perceived externally. It’s important that CFOs ensure the investments their company is making do not break the trust built with external parties.

“When the data that AI uses to interact with external parties is biased or insecure, when the model is not updated to reflect current regulations, or when employees lack the skills to explain AI results to their customers: these failure points can lead to AI providing information that is incorrect, biased, or simply contrary to the company’s culture. This will erode the trust organisations have built with their stakeholders,” says Christensen.

Rigid Mindset

When properly implemented, AI will perform some tasks better than humans. Framing this simply as a set of lower-value tasks that human employees will no longer carry out is frightening for those employees, who tend to perceive it as the replacement of humans by machines and resist the change.

“The mistake CFOs often make is that while they tell employees what they want them to stop doing, they don’t properly identify what they want them to start doing, or provide any support for new ways of working,” says Christensen. “Rather than just asking ‘Is the tool easy to use?’, ask, ‘How will staff react to the use of AI, and how are we planning for their response?’”