As artificial intelligence continued its rapid move from experimentation into production environments, 2025 marked a turning point for how organisations approach AI adoption – particularly in the context of cybersecurity, governance and enterprise risk.

According to Kevin Halkerd, senior risk and security analyst at e4, the past year made one reality abundantly clear: AI maturity is no longer defined by technical capability alone, but by an organisation’s ability to control, govern and secure it.

“AI has accelerated both innovation and risk,” says Halkerd. “The organisations that are succeeding are not necessarily the ones with the most advanced models, but those that put governance, data discipline and accountability in place before they scale.”


From experimentation to enterprise accountability

Throughout 2025, AI featured prominently across industries, from automated decision-making and analytics to security monitoring and process optimisation.

However, Halkerd notes that many organisations underestimated the operational and security implications of deploying AI at scale.

“AI doesn’t arrive in isolation,” he explains. “It touches data pipelines, identity systems, cloud environments and software supply chains. If those foundations aren’t visible, auditable and secure, AI simply amplifies existing weaknesses.”

This reality has pushed cybersecurity and risk teams closer to the centre of AI strategy discussions. Rather than being consulted after deployment, security leaders are increasingly involved at the design and architecture stage.

“We’re seeing a shift from ‘secure it later’ to ‘secure by design’. That’s a significant maturity step,” Halkerd says.


Security as a strategic enabler, not a blocker

Contrary to the long-held perception that security slows innovation, Halkerd believes 2025 reframed the relationship between cybersecurity and speed.

“Security maturity is not static. The organisations that maintain trust over time are those that treat governance as a continuous discipline,” he notes. “Controls must evolve as quickly as the threats and technologies they’re designed to manage.”

“When security is embedded early, it actually gives leaders the confidence to move faster,” he says. “You can experiment, iterate and scale knowing there are guardrails in place.”

In practice, this means embedding controls such as DevSecOps pipelines, continuous monitoring, model auditability, and clear ownership of AI outputs. It also means understanding where data originates, how models are trained, and how decisions can be explained and challenged.

“Trusted innovation is the only kind that scales sustainably,” Halkerd adds. “Anything else eventually hits a wall – whether that’s regulatory, reputational or operational.”


AI governance meets perpetual volatility

Beyond AI itself, e4’s 2025 narrative placed strong emphasis on leadership, culture and organisational resilience. Halkerd says the year reinforced a broader truth for technology leaders: volatility is no longer episodic – it is structural.

“We’re no longer dealing with isolated disruptions,” he says. “We’re operating in a constant state of change – from evolving threat landscapes to shifting regulations and accelerating customer expectations. That demands a different kind of organisation.”

According to Halkerd, this environment elevates strategic agility and resilience from abstract concepts to core enterprise capabilities.

“The organisations that performed best weren’t the ones with the most rigid plans,” he explains. “They were the ones with the strongest muscles for sensing change, adapting quickly and making informed decisions under pressure.”


Culture and capability, not just tools

Halkerd cautions that while emerging technologies can amplify agility and security, they cannot replace the cultural and operational discipline required to sustain them.

“Technology enables resilience, but culture determines whether it actually works,” he says. “Without clear decision rights, cross-functional collaboration and accountability, even the best tools simply digitise old problems.”

He adds that leaders must increasingly balance innovation with responsibility – particularly as AI systems influence customer outcomes, financial decisions and regulatory exposure.


Looking ahead

As AI becomes more deeply embedded in enterprise systems, Halkerd believes the next phase of digital transformation will be defined less by speed of adoption and more by quality of execution.

“The winners will be those who treat trust as a design principle, not a compliance exercise,” he concludes. “In an AI-driven world, control, transparency and resilience are not constraints on growth – they are what make growth possible.”