At least 50% of all enterprise cybersecurity incident response efforts will focus on incidents involving custom-built AI-driven applications by 2028, according to Gartner.

Christopher Mixter, vice-president analyst at Gartner, explains: “AI is evolving quickly, yet many tools – especially custom-built AI applications – are being deployed before they’re fully tested. These systems are complex, dynamic and difficult to secure over time.

“Most security teams still lack clear processes for handling AI-related incidents, which means issues can take longer to resolve and require far more effort.”

Gartner recommends security leaders get involved early in custom-built AI application projects to ensure there is sufficient time, that resources are planned and that expectations are managed for adequate security controls.

This is one of its top cybersecurity predictions, and Gartner recommends cybersecurity leaders factor these predictions into their security strategies over the next two years.


By 2028, more than 50% of enterprises will use AI security platforms to secure third-party AI service usage and protect custom-built AI applications.

AI security platforms give organisations a unified way to manage the new risks associated with rapid AI adoption, such as prompt injection and data misuse.

By centralising visibility and control, these platforms help CISOs enforce use policies, monitor AI activity and apply consistent security guardrails across third-party and custom AI applications.
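Such guardrails often begin as a simple pre-flight screen on incoming prompts. A minimal sketch, assuming a hypothetical `screen_prompt` check and an illustrative deny-list (a production platform would rely on policy engines and model-based classifiers, not a static pattern list):

```python
import re

# Illustrative deny-list for a prompt-injection guardrail.
# These patterns and names are assumptions for the sketch only.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (your|the) system prompt",
    r"disregard .* guardrails",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the guardrail, False if blocked."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

In practice the same checkpoint would also log the decision for incident response, which is where the platform's centralised visibility comes from.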

Security leaders must evaluate AI security platforms to ensure they can secure both third-party AI services and custom-built AI applications.


Through 2030, 33% of IT work will be spent remediating AI data debt to secure AI.

Most organisations’ data is not AI‑ready, with poorly secured and unstructured data a major barrier to AI adoption.

In response, cybersecurity leaders are expanding data loss prevention to monitor and restrict data flows triggered by GenAI and agentic AI data access events and requests.
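Extending DLP to GenAI traffic amounts to inspecting each outbound flow before it leaves the organisation. A minimal sketch, assuming hypothetical pattern names and deliberately simplified regexes (not any vendor's actual rules):

```python
import re

# Illustrative DLP-style patterns for sensitive data in outbound prompts.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_outbound(text: str) -> list[str]:
    """Names of the sensitive-data patterns found in the outbound text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def allow_flow(text: str) -> bool:
    """Permit the flow only if no sensitive pattern matches."""
    return not classify_outbound(text)
```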

Gartner recommends they collaborate with data and analytics and AI leaders to define a structured program of data discovery, assessment and access control remediation.


Through 2027, manual AI compliance processes will expose 75% of regulated organisations to fines exceeding 5% of their global revenue.

Despite the distinct approaches to regulation globally, AI regulations converge on a universal demand for a systematic risk management approach.

Even if CISOs can stay ahead of security, privacy and cyber risk management regulations and standards, new regulations covering AI safety call everything into question.

For greater success, Gartner recommends establishing cyber governance, risk and compliance (GRC), and enabling compliance through technology.


By 2027, 30% of organisations will require comprehensive sovereignty of their cloud security controls to address continued geopolitical turmoil.

Geopolitical turmoil and local regulations are creating untenable data risks, which require many organisations to make sovereignty a key part of their cyber resilience approach.

This will necessitate changes in vendor selection for cloud-tethered offerings and in prioritisation efforts as geopatriation requirements intensify.

Cybersecurity leaders must play an active role in defining organisational sovereignty requirements, including those required by local regulations.


By 2028, 70% of CISOs will use identity visibility and intelligence capabilities to shrink the IAM attack surface, reducing the risks of credential compromise.

Identity has become a primary attack surface as organisations struggle to manage the rapid growth and complexity of human and machine identities.

This sprawl creates visibility gaps between isolated identity and access management (IAM) tools and increases the risk of misconfigurations.

Gartner recommends these blind spots be addressed by integrating unified, AI‑powered identity visibility and intelligence platforms to improve detection and remediation.
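The blind spots such platforms target can be pictured as the gaps between isolated inventories. A minimal sketch, assuming hypothetical record shapes and account names, that correlates two IAM sources to surface ungoverned identities:

```python
# Illustrative correlation of two isolated IAM inventories to surface
# blind spots: accounts with no owner, or unknown to the central directory.
cloud_accounts = [
    {"account": "svc-deploy", "owner": None},       # unowned service account
    {"account": "j.smith", "owner": "Jane Smith"},  # governed human identity
]
directory_accounts = {"j.smith", "a.jones"}

def find_blind_spots(accounts, directory):
    """Accounts that are unowned or absent from the central directory."""
    return sorted(
        a["account"]
        for a in accounts
        if a["owner"] is None or a["account"] not in directory
    )
```

At enterprise scale this correlation spans dozens of identity stores, which is why Gartner points to unified, AI-powered platforms rather than manual reconciliation.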