A 2025 Gartner survey of 302 cybersecurity leaders revealed that deepfake incidents are increasingly impacting organisations, with 43% reporting at least one audio call incident and 37% experiencing deepfakes in video calls.

CIOs and security leaders who underestimate these risks put their organisations and careers in jeopardy. Yet, many still struggle to monitor and protect digital identities, leaving critical gaps in their security strategies.

Apeksha Kaushik, principal analyst at Gartner, unpacks how AI-powered attacks are picking up steam and how CIOs can proactively defend against this threat to safeguard their organisations and their digital identities.

 

How is AI accelerating the spread of disinformation?

AI-driven disinformation campaigns and deepfakes are increasingly used by adversaries to access sensitive data, disrupt operations, manipulate public opinion and pursue financial or political gain.

Sixty-two percent of organisations have experienced at least one deepfake attack in the last 12 months that involved some form of social engineering or exploited existing automated processes, according to the Gartner cybersecurity leaders survey.

Gartner predicts that by 2027, AI agents will halve the time needed to exploit account takeovers, giving organisations even less time to respond to these threats. AI agents will automate more steps in the account takeover kill chain, including using deepfake voices to make social engineering more convincing and compromising authentication channels.

Attackers leverage GenAI tools to create deepfake content, launching attacks either on existing automated processes (such as voice or face recognition) or on employees directly. Not surprisingly, leaders are worried. A 2024 Gartner survey of 456 CEOs and other senior business executives worldwide revealed that 62% think deepfakes will create at least some operating costs and complications for their organisations in the next three years.

 

What are the business and leadership risks of ignoring disinformation security?

Ignoring disinformation security is a direct threat to business continuity and leadership credibility. Leaders risk inflicting severe reputational damage on themselves and their organisations, eroding stakeholder trust and triggering regulatory scrutiny in scenarios like data breaches or compliance failures.

The digital attack surface, including brand assets, executive profiles and sensitive data, is an easy target for adversaries – especially when organisations do not proactively monitor and manage their digital assets as part of attack surface management.

Publicly accessible assets such as social media accounts, executive photos and corporate logos are especially vulnerable to misuse in impersonation attacks, including fake websites and fraudulent content.

False narratives about the enterprise or its executives can quickly spiral across digital platforms, making it nearly impossible to rectify or track the origin of misinformation once it spreads.

 

What should organisations do to combat AI-generated disinformation?

Gartner predicts that, by 2027, 50% of enterprises will be investing in disinformation security products or services and TrustOps strategies, up from less than 5% today.

Threat intelligence and digital risk protection services (DRPS) are already components of disinformation security. Now, an emerging tactic called narrative intelligence is expanding disinformation security's capabilities with a more proactive defence.

Narrative intelligence looks beyond the organisation and its technical vulnerabilities to uncover perception-based, and even latent, threats by analysing how and why disinformation spreads. This allows it to identify and counter AI-generated disinformation campaigns before they cause operational or reputational harm.

This preemptive monitoring and action is critical given the speed and scale at which AI can generate and distribute disinformation.

Effective narrative intelligence enables organisations to anticipate and outmanoeuvre disinformation threats, gaining a strategic edge over less prepared competitors. Rather than absorbing the inertia and harm left in the wake of widespread disinformation, organisations can protect perception of, and trust in, their brand before an attack takes hold.