Cyber insurance was created for a world where most incidents looked familiar: ransomware, data breaches, business email compromise, and outages.
By Yaron Assabi, founder of eInsurer
And while many security incidents will continue to start that way in 2026, the difference lies in how they are executed and the speed at which they move through the corporate network.
Generative AI has removed much of the friction typically associated with cybercrime. In 2023, SlashNext reported a 1,265% increase in malicious phishing messages following the launch of ChatGPT. In many ways, AI has democratised cybercrime by giving even non-technical users access to tools that can be used for nefarious purposes.
This is where executives often make the wrong assumption. They hear “AI attacks” and conclude that cyber insurance is now irrelevant. The more accurate takeaway is that cyber insurance still matters, but policies and underwriting must reflect this dynamic environment.
Understanding the risks
The first risk lies in policy wording. A traditional cyber policy can be strong on cover for breach response costs, forensics, notification, and business interruption, yet unclear on AI-shaped scenarios such as deepfake-enabled vendor fraud, synthetic identities, data poisoning, or the compromise of an AI system that sits inside the supply chain. Insurers are already tightening definitions and reducing ambiguity, including through explicit AI endorsements and clarifications.
The second risk is speed. Incident costs are rising, and response windows are shrinking. IBM’s 2024 Cost of a Data Breach report put the global average cost at $4.88 million, a reminder that the financial blast radius is not theoretical. When an incident unfolds through automated, AI-enabled social engineering, delays in detection, attribution, and claim readiness mean companies lose twice: first in operational damage, then in the disputes that invariably arise at the claim stage.
The third risk is accumulation. Cyber risk is not “stationary” in the way many traditional insurance threats are, and the systemic effects of shared platforms and connected vendors make loss correlation a real concern. This is one reason cyber underwriting is increasingly shifting from static questionnaires toward more dynamic, posture-linked assessments.
Making the decision
So, can cyber insurance protect you from AI-powered criminals? Yes, but only if you treat it as an executive risk instrument.
Here are some of the questions that change outcomes in the real world:

- Do you have a clear map of where AI is used internally and by key third parties?
- Does your cyber policy address AI-linked attack paths explicitly, or does it leave grey zones?
- What exclusions could bite if governance controls are weak or tool usage is unauthorised?
- What evidence would your insurer require in an AI-linked event, and can you produce it quickly?
- What pre-incident risk services does the insurer provide to reduce the chance of a loss?
The market is heading towards insurance that behaves more like digital infrastructure, with modular cover that can adapt as exposure changes. In an AI era, cyber insurance is not “protection against hackers”. It is a negotiated operating model for resilience. If you do not update that model, you will discover the gaps only after the incident.