Artificial intelligence (AI) is reshaping the business landscape in South Africa, unlocking new efficiencies and enhancing customer experiences across sectors.
But while much of the focus has been on the benefits, we must not ignore the risks, writes Nic Laschinger, chief technology officer of Euphoria Telecom.
One of the most urgent and underappreciated threats is the use of AI in voice fraud.
Until recently, voice-based scams were fairly unsophisticated. A scammer would use scripted robocalls, or harvest a victim’s details from social media and then phone them posing as a legitimate institution such as a bank. A typical current scam involves someone claiming to be from an institution’s fraud department, calling to alert you to an illegitimate transaction on your account and asking you for an OTP to reverse it. The scammer then uses that OTP to access your account and steal your money.
Today, fraudsters are using AI to clone voices with remarkable accuracy. These tools can replicate a person’s tone, accent and speech patterns from samples pulled from voice notes, podcasts, social media clips or video calls. Once a voice is cloned, it can be used to impersonate someone in a convincing and highly manipulative way.
The implications for South African businesses and individuals are serious. Imagine an employee receiving a panicked call from what sounds like a familiar executive, demanding an urgent payment or data release. Would they question it?
Globally, the financial damage is mounting and South Africa is not immune. In 2024, a local cryptocurrency exchange thwarted a fraud attempt involving a WhatsApp voice note from an AI-cloned executive. The attacker’s goal was to authorise a wallet transfer, but internal controls flagged the request as suspicious before damage was done.
In response to the rise in such incidents, the Financial Sector Conduct Authority (FSCA) has issued numerous warnings about deepfakes, encouraging businesses and consumers to be vigilant in the face of scams using deepfake video imagery of public figures to promote investments. Those targeted range from politicians like President Cyril Ramaphosa to celebrities like Leanne Manas, as well as CEOs and directors of local businesses.
We’re also seeing a rise in personal scams. Criminals have started using AI-generated voices to impersonate children or relatives in distress, targeting parents or grandparents for quick money transfers. These scams prey not only on digital gaps, but on human emotion.
This is not tomorrow’s problem – it’s happening now. And South Africa, where mobile voice communication is still widely used for business and personal interaction, is particularly exposed. Our trust in the spoken word is a cultural asset – but it’s also becoming a vulnerability.
The financial sector is especially at risk. Some local banks have invested heavily in voice biometric authentication. But if a cloned voice can trick that system, it could open the door to account breaches and identity theft. Voice, once considered a secure form of authentication, is no longer enough on its own.
What should South African businesses be doing?
First, awareness is key. Employees at every level – from frontline staff to the C-suite – need to understand how voice fraud works, and how to verify requests that seem urgent or unusual, even if they appear to come from someone senior. Simple protocols like calling back on a known number or confirming via a different channel could prevent major losses.
Second, organisations must strengthen their identity verification methods. Multi-factor authentication that combines voice with something the user knows (like a PIN) or has (like a token or app) is now essential. Financial institutions, in particular, should invest in fraud detection tools that can analyse not just what is being said, but how – flagging subtle discrepancies in cadence or metadata.
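To make the principle concrete for technical readers: the policy above can be expressed as a simple rule that a voice match alone never approves a request. The sketch below is purely illustrative – the function and field names are invented for this example and do not reflect any particular bank’s or vendor’s system.

```python
# Illustrative sketch only: a verification policy in which a voice biometric
# match is never sufficient on its own. All names here are hypothetical.

from dataclasses import dataclass


@dataclass
class VerificationResult:
    approved: bool
    reason: str


def verify_caller(voice_match_score: float,
                  pin_ok: bool,
                  token_ok: bool,
                  voice_threshold: float = 0.95) -> VerificationResult:
    """Approve only when a strong voice match is backed by a second factor."""
    if voice_match_score < voice_threshold:
        return VerificationResult(False, "voice match below threshold")
    if not (pin_ok or token_ok):
        # A cloned voice can pass the biometric check, so the policy
        # always demands something the caller knows (PIN) or has (token).
        return VerificationResult(False, "second factor required")
    return VerificationResult(True, "voice plus second factor verified")
```

The point of the design is the second check: even a perfect voice score is rejected unless an independent factor confirms it, which is exactly the gap a cloned voice exploits.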
Third, regulators and the tech sector need to keep pace. South Africa is beginning to explore AI regulation, and voice fraud must be part of that conversation. We need ethical standards for voice synthesis tools, clearer legal consequences for misuse, and public-private collaboration to track and prevent abuse.
Finally, the technology community must take responsibility. AI voice models should come with built-in safeguards – such as traceable watermarks or usage restrictions – to prevent misuse. Innovation must be matched with accountability.
We’re entering an era where hearing is no longer believing. In a high-trust business culture like ours, that shift has profound implications. Protecting voice identity must now be treated with the same seriousness as safeguarding passwords, PINs or company data.
AI is not the enemy. But like any powerful tool, it can be used to build – or to deceive. South Africa’s business community must act now to ensure we’re ready for both possibilities.