While just under half of employees surveyed in South Africa (42%) said they could tell a deepfake from a real image, only 21% could actually distinguish a real image from an AI-generated one in a test, Kaspersky says.

This leaves organisations vulnerable to such scams, with cybercriminals using generative AI imagery in several ways for illegal activities, including creating fake videos or images to defraud individuals or organisations.

For instance, cybercriminals can create a fake video of a CEO requesting a wire transfer or authorising a payment, which can be used to steal corporate funds. They can also fabricate compromising videos or images of individuals to extort money or information from them. Cybercriminals can further use deepfakes to spread false information or manipulate public opinion – 55% of employees surveyed in South Africa believe their company could lose money because of deepfakes.

“Even though many employees claimed that they could spot a deepfake, our research showed that only half of them could actually do it,” says Dmitry Anikin, senior data scientist at Kaspersky. “It is quite common for users to overestimate their digital skills; for organisations this means vulnerabilities in their human firewall and potential cyber risks – to infrastructure, funds, and products.

“Continuous monitoring of dark web resources provides valuable insights into the deepfake industry, allowing researchers to track the latest trends and activities of threat actors in this space,” he adds. “This monitoring is a critical component of deepfake research which helps to improve our understanding of the evolving threat landscape. Kaspersky’s Digital Footprint Intelligence service includes such monitoring to help its customers stay ahead of the curve when it comes to deepfake-related threats.”