With emerging legal and regulatory uncertainties, geopolitical and economic hurdles, and the dual nature of AI as both a potential threat and a valuable business asset, chief audit executives (CAEs) are facing increased pressure from the board to provide assurance over risk management in 2025, according to Gartner.
“2025 brings more high-profile risks and opportunities that are driving growing board focus on risk management, so CAEs need to be sure they are effective in helping the audit committee (AC) discharge its risk oversight responsibilities,” says Margaret Moore Porter, distinguished vice-president and chief of research in the Gartner Assurance Practice.
ACs need more risk insight from audit to support the board's oversight responsibilities, in particular on systemic governance issues and the highest-impact emerging risks, such as AI.
“CAEs typically get less than 30 minutes with the AC during formal presentations,” says Porter. “They must quickly focus on the information the AC needs most: currently that relates to emerging high-impact risks such as AI and any systemic governance issues.”
CAEs should prioritise highlighting risk trends, root causes, and systemic governance issues in their communications with audit committees, and use supplemental materials to provide detailed background on specific risks and routine functional updates. This approach allows CAEs to maximise their limited time, focusing on the risks of greatest interest to ACs.
AI Risks
“AI has burst onto the business scene with the arrival of numerous public generative AI tools,” says Porter. “What is perhaps most difficult for internal audit, other than the rapid adoption of the technology, is that AI risks manifest in complex and varied ways. Therefore, audit leaders are facing heightened pressure to ensure audit coverage of the new technology.”
AI risks can take on many forms, including behavioural risks, transparency risks, and security and data risks:
- Behavioural risks relate to the ways algorithms and IT systems can misbehave, such as producing inaccurate or biased results, providing outdated information or failing to comply with scoping requirements.
- Transparency risks relate to model explainability and disclosure of AI involvement.
- Security and data risks relate to the ways in which accidental or intentional leakage or misuse of personal or confidential information can impact the enterprise.
“While most audit leaders accept it is important to cover key AI risks in the next 12 months, less than a quarter feel confident in their ability to do so,” says Porter. “To increase their confidence in providing assurance over complex AI risks, audit should collaborate with assurance partners to assess and prioritise AI risk coverage needs.”
To better support the organisation in managing and assessing AI risks, Gartner experts recommend internal audit work with legal, compliance, and risk teams to:
- Get organised for AI accountability and define enterprise practices
- Discover and inventory all AI used in the organisation
- Revisit and implement AI data classification, protection and access management
- Implement technical controls to support and enforce policies
- Conduct ongoing governance, monitoring, validation, testing and compliance throughout the whole process
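For audit teams that want to see how these steps could translate into practice, the sketch below models a minimal AI-use inventory with classification-aware policy checks. All names, classification levels and control labels are illustrative assumptions, not a Gartner-defined schema; a real programme would draw on the organisation's own data classification policy.

```python
from dataclasses import dataclass, field

# Hypothetical record for one AI use case in the enterprise inventory.
@dataclass
class AIUseCase:
    name: str
    owner: str                     # accountable team (accountability, step 1)
    data_classification: str       # e.g. "public", "internal", "confidential"
    approved: bool = False         # governance sign-off
    controls: list = field(default_factory=list)  # technical controls in place

# Step 2: discover and inventory all AI used in the organisation.
inventory = [
    AIUseCase("chatbot-support", "CX", "internal", approved=True,
              controls=["access-management", "output-filtering"]),
    AIUseCase("resume-screener", "HR", "confidential", approved=False,
              controls=[]),
]

# Steps 3-5: controls required per classification level (assumed policy),
# plus an ongoing monitoring pass that flags gaps for follow-up.
REQUIRED_CONTROLS = {"confidential": {"access-management", "audit-logging"}}

def compliance_findings(inventory):
    """Return human-readable findings for unapproved or under-controlled uses."""
    findings = []
    for uc in inventory:
        if not uc.approved:
            findings.append(f"{uc.name}: missing governance approval")
        required = REQUIRED_CONTROLS.get(uc.data_classification, set())
        missing = required - set(uc.controls)
        if missing:
            findings.append(f"{uc.name}: missing controls {sorted(missing)}")
    return findings

for finding in compliance_findings(inventory):
    print(finding)
```

Run periodically, a check like this supports the "ongoing governance, monitoring, validation, testing and compliance" step by turning the inventory into a repeatable control test rather than a one-off exercise.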