By 2026, 80% of large enterprise finance teams will rely on internally managed and owned generative AI platforms trained with proprietary business data, according to Gartner.

“The recent entry of large, well-established companies into the generative AI market has kicked off a highly competitive race to see who can deliver revolutionary value first,” says Mark McDonald, senior director analyst in the Gartner Finance Practice. “Leadership teams do not want to fall behind peers; however, as the chief steward of an organization’s financial health, CFOs must balance the risks and rewards of tools like generative AI.

“There are three distinct conversations that CFOs should have across leadership circles to ensure that reasonable expectations are established and that the use of generative AI creates value without introducing unacceptable risks.”

The three conversations CFOs should lead are:

Discussion #1: Debunk the hype to avoid inflated expectations

Generative AI presents the potential for businesses to navigate the growing complexity and volume of their data with ease. However, the technology’s limitations introduce several real challenges to this objective, leading Gartner to place it at the Peak of Inflated Expectations.

CFOs should partner with senior technology leadership (such as the CIO, chief data officer and chief information security officer) to distinguish hype from reality and share the results with other executive leadership team members.

Current generative AI solutions represent a collection of modern innovations, including deep learning, natural language processing, reinforcement learning and graph networks, all of which deliver remarkable outcomes. However, the extensive number of parameters and connections used to create these outputs prevents any transparent reconciliation of the algorithm’s response.

This opacity includes an inability to determine whether the algorithm has developed unstated objectives, or whether it is basing conclusions on inaccurate, irrelevant, unethical or even illegal information.

“Such limitations form the backbone of conversations that CFOs must have with leadership circles when considering the use of generative AI,” says McDonald.

Discussion #2: Define generative AI use cases that are aligned, responsible and actionable

With an understanding of generative AI’s limitations, CFOs can responsibly direct a conversation with management teams aimed at defining use cases. They must collaborate with operational management, executive leaders, and representatives from the user community to define actionable generative AI use cases that align with the organization’s overall strategy and risk tolerance.

“As with any AI solution, the best use cases exploit a specific business’s strengths and defend its weaknesses,” says McDonald. “Copying use cases from other companies will likely not have the same impact in an organization with different circumstances.

“Instead, aligning generative AI’s fundamental capabilities to a business’s unique strategies and objectives delivers a value that differentiates a company from its competitors.”

Discussion #3: Develop generative AI governance and guidelines for acceptable use

Generative AI requires human oversight to ensure that outcomes adhere to the nuance of human judgment and fairness. While generative AI’s output may appear human-like and compelling, the results may not always be accurate, unbiased, or reliable.

“CFOs should engage legal, HR, audit, security, and other relevant corporate support functions to establish usage guidelines to minimize security, compliance, regulatory and other intellectual property risk,” says McDonald. “This discussion must also include the potential impact to the workforce, company culture and necessary training.”