The use of AI to create voice clones, or deepfakes of people talking on video, is gaining ground. While some examples are harmless, such as the recent AI-generated images of Pope Francis wearing a puffy designer jacket, others pose immense risks to individuals.

“Current fears around deepfakes include threat actors using voice clones of famous celebrities or political leaders to commit fraud. For example, a celebrity could be impersonated to ask for donations to support a fake cause, or to mobilise supporters around dangerous or harmful ideas,” says Brian Pinnock, vice-president: sales engineering for EMEA at Mimecast.

In one recent example, a criminal used deepfake audio in a vishing (voice phishing) attack: he cloned a young girl’s voice and demanded a ransom from her mother for her supposed kidnapping. The audio was so convincing that the mother never doubted it was authentic until a friend called to confirm the girl was safe at home.

The risks of AI tools creating convincing audio deepfakes extend to the business realm too. Cybercriminals can search for executives with a strong public profile, such as a CEO who is often interviewed on TV or radio, or who regularly speaks at conferences. And because many people now post videos of themselves on LinkedIn, Instagram or TikTok, almost anyone could potentially be impersonated.

Criminals scrape these platforms, then feed the video footage and voice recordings into AI tools to create audio and video deepfakes that are nearly indistinguishable from the real thing. These deepfakes can then be used on collaboration platforms to trick people into thinking they are interacting with a legitimate company representative or a high-ranking executive.

Deepfakes also open the door to effective asynchronous email attacks. A deepfake voice note or video can be embedded in an email and followed up with a spoofed message, making the request (an urgent payment, say, or the sharing of sensitive company information) far more believable. An email that impersonates a high-ranking executive and includes a deepfake voicemail as an attachment makes for a hugely convincing attack that could fool even a cautious recipient.

“AI tools will only grow in power and accuracy as the technology matures, creating growing risks as threat actors refine their attack methods over the coming years,” warns Pinnock. “For consumers, this will mean practising safe cyber hygiene and maintaining high levels of vigilance. For businesses, which are typically the preferred targets of cyberattacks due to the greater rewards on offer, defending against deepfakes and other AI-enhanced cyber threats will require significant bolstering of defences.”

To protect against impersonation attacks such as audio or video deepfakes, Pinnock recommends organisations implement layered security solutions. This should include email security that integrates into a broader cybersecurity ecosystem, regular awareness training and AI-powered detection.

“Nearly all cyberattacks utilise email in some way, so having powerful and effective security controls for company email is essential. Understanding common attack types also helps people avoid risky behaviour, which makes regular, effective and relatable security awareness training a must-have for every organisation. In addition, AI-powered tools can provide a vital extra layer of security.

“An example is warning users about potentially suspicious email addresses, based on factors such as whether anyone in the organisation has ever engaged with the sender, or whether the domain is newly created. This helps employees make an informed decision on whether to act on an email. Combined, these security measures help ensure that all employees in an organisation are able to work protected.”
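To illustrate the idea, the sketch below shows how such sender-risk warnings might be computed. This is a minimal, hypothetical Python example: the function name, threshold and data sources are assumptions made for illustration, not Mimecast’s actual implementation.

from datetime import datetime, timedelta, timezone

# Illustrative threshold: flag domains registered in the last 30 days.
# (A hypothetical value for this sketch, not a vendor setting.)
NEW_DOMAIN_DAYS = 30

def sender_risk_flags(sender: str,
                      known_contacts: set[str],
                      domain_created: datetime | None) -> list[str]:
    """Return human-readable warnings for an email sender, based on two
    simple heuristics: no prior engagement, and a newly created domain."""
    flags = []

    # Heuristic 1: has anyone in the organisation engaged with this sender?
    # In practice this set would be built from the company's email logs.
    if sender.lower() not in known_contacts:
        flags.append("No one in your organisation has engaged with this sender before.")

    # Heuristic 2: was the sender's domain registered very recently?
    # The creation date would come from a WHOIS/RDAP lookup in practice.
    if domain_created is not None:
        age_days = (datetime.now(timezone.utc) - domain_created).days
        if age_days < NEW_DOMAIN_DAYS:
            flags.append(f"The sender's domain was registered only {age_days} days ago.")

    return flags

# Example: a first-time sender on a week-old domain triggers both warnings.
warnings = sender_risk_flags(
    sender="ceo@examp1e-corp.com",
    known_contacts={"alice@partner.com", "bob@supplier.com"},
    domain_created=datetime.now(timezone.utc) - timedelta(days=7),
)
for w in warnings:
    print("WARNING:", w)

In a real deployment, the resulting warnings would typically be surfaced as a banner in the recipient’s mail client, giving the employee context before they reply, click a link or open an attachment.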