The systems designed to verify identity and secure financial transactions are rapidly becoming the weakest link in the fight against fraud, as new data reveals the scale of AI-driven impersonation across Southern Africa.

According to the Smile ID 2026 Digital Identity Fraud Report, nearly 87% of rejected biometric verification attempts in the region are now linked to AI-assisted impersonation and spoofing, highlighting a dramatic shift in how fraud is executed and scaled.

For South African businesses and financial institutions, the implication is stark: the very tools relied upon to establish trust – facial recognition, voice verification, and digital identity checks – are now being systematically exploited.

“Biometric verification was designed to confirm that a person is who they say they are. It was never designed to confirm that an interaction is authentic in an age of AI,” says Matthew Renirie, co-founder of Certified AI Access. “What we are seeing now is not just an increase in fraud; it’s a fundamental shift, because the control layer itself has become the vulnerability.”

The report highlights a broader evolution in fraud patterns across the region.

“Fraud is no longer about breaking into systems; it’s about becoming someone else. We’re seeing AI-generated faces and cloned voices pass biometric checks, synthetic identities reused at scale, and attacks moving beyond onboarding into accounts, transactions, and dispute processes,” he adds.

“This is the rise of a synthetic identity economy that is structured, repeatable, and industrialised.”

Renirie points out that fraud has effectively become a business model. “AI has collapsed the cost of deception,” he adds. “What used to take skill, time, and coordination can now be executed instantly, repeatedly and at scale.”

While many organisations continue to invest heavily in verification tools and cybersecurity systems, experts warn that the challenge is no longer purely technical.

Professor Clifford Shearing, an authority on governance and security, argues that the rise of AI-enabled fraud reflects a deeper structural issue.

“We are seeing the limits of governance models that rely on static controls in a dynamic threat environment,” says Shearing. “Systems designed to verify identity once are inherently vulnerable to manipulation over time.”

“This is not simply about better technology; it requires a shift toward continuous oversight, adaptive systems, and new forms of institutional accountability,” he says.

“Traditional fraud systems are built to verify once, apply rules, and respond after the damage is done. AI-driven fraud doesn’t follow those rules: it adapts in real time, behaves like a legitimate user, and moves straight through static controls.”

That gap is widening fast, Renirie adds: “Most organisations are still relying on systems designed for a previous generation of threats. They are verifying identity once, and then assuming trust persists. That assumption no longer holds.”

A new category of defence is emerging, one that moves beyond verification toward continuous validation of digital interactions.

“Certified AI Access describes this as trust infrastructure: a layer that continuously analyses whether interactions are real, manipulated or synthetic,” Renirie explains. “The future of fraud prevention is not about stronger gates at entry; it’s about continuously assessing trust across every interaction – voice, video, text, and behaviour – all in real time.”

Global estimates suggest AI-driven fraud could cost businesses up to $40-billion annually within the next two years. Beyond financial loss, Renirie says that the greater risk may be the erosion of trust in digital systems themselves.

“If organisations cannot reliably distinguish between real and synthetic interactions, the entire foundation of digital commerce is at risk,” he adds. “The question is no longer whether fraud will happen but whether institutions are equipped to recognise it when it does.”