AI is driving down the cost of attacks while increasing the value of defence, but where does this race for dominance end?
Richard Frost, head of technology solutions and consulting at Armata Cyber Security, attempts to answer the question.
The cost of attacks is falling faster than the cost of defence, and this is forcing a structural reset in how cyber-risk is priced, insured and managed.
A recent study by Wiz Research and Irregular found that AI agents were capable of undertaking sophisticated offensive attacks for less than $50 (the cost of the LLM). Traditionally, these levels could only have been achieved with a human team costing around $100,000.
On the flip side, well-implemented defensive AI is starting to show measurable and material savings in breach costs and SOC labour, which is widening the gap between the AI haves and the have-nots.
The result? A radical change in cybersecurity economics as threat capabilities become cheaper at scale, lowering the skill barrier and delivering higher yield per campaign.
AI tools are now the frontline of attack, handling reconnaissance, exploit crafting, and phishing content so threat actors can run more complex campaigns and increase their total attack volumes.
The increase in hyper-personalised phishing and deepfake threats substantially raises click-through and conversion rates, which makes each campaign economically more attractive than the sluggish, traditional approaches of the past.
AI has changed where threats take place, and trust has become an asset prized by both sides. Identity itself is under attack as AI's ability to mimic is used to gain access to the organisation. A single identity, stolen and copied, can create absolute chaos within the business.
On the defensive side of the fence, the cost factor is equally relevant. In the IBM 2025 Cost of a Data Breach Report, the average cost of a breach has dropped from $4.88-million in 2024 to $4.44-million in 2025.
Companies using AI and security automation have reported a decrease in breach costs of between 32% and 43% compared with those using no AI, along with shorter detection and containment times.
According to Harvard Business Review, the SOC is also benefitting from the intervention of AI. Agents are able to fill the gaps left by the lack of skilled talent (and the exhausted teams struggling with persistent security fatigue), cutting the time spent surfacing and triaging alerts and ensuring that response times are managed more effectively.
Then, of course, there’s the impact of AI on premiums and risk models. The World Economic Forum 2026 Global Cybersecurity Outlook pointed out how AI-enabled threats and fraud are widening the cyber inequity gap.
Companies can’t afford the resources and expertise required to combat the threats, while insurance companies are having to revisit their assumptions around loss frequency, severity and concentration.
The AI arms race is also, says McKinsey, creating a $2-trillion opportunity. Companies can prioritise the securing of AI and the development of AI security platforms as distinct growth segments, with model security, data pipeline protection and AI governance becoming increasingly critical components of cyber spend and investment.
This same AI race is changing the underlying economics of computing itself. AI workloads are dependent on memory and GPU capacity, and demand for both has accelerated far beyond supply chain capabilities.
The result is that AI is placing immense strain on supply, with the cost of GPUs, memory, storage and power rising sharply.
There's a growing contradiction: the technology designed to increase efficiencies and reduce costs is making the foundational layers of computing more expensive.
Hardware upgrades that once felt incremental are becoming disproportionately expensive, impacting budgets and expenditure. As AI adoption scales, the pressure on GPU and memory supply will intensify, turning compute into a contested resource rather than a commodity.
And how does this impact security? The AI arms race is also raising the cost of participation, making it challenging for companies to secure the infrastructure needed to run AI-driven security models.
Companies are facing an uphill cost battle to ensure their systems are capable of detecting faster, responding sooner and operating at scale, and those who cannot are finding themselves increasingly exposed.
The growing divide is not just between attackers and defenders, but between those who can afford to operate in an AI-driven security environment and those who can’t.
Resolving this will come down to working with security firms that understand the market and offer AI-powered tools that can address these gaps without putting budgets in crisis.