After ransomware, the next mean-spirited genie to escape the cybercrime bottle will most likely be deepfakes: audio and video material, maliciously created with the help of artificial intelligence (AI) for manipulation and fraud purposes.

By Chris Mayers, chief security architect at Citrix

What follows are some thoughts on why deepfakes are an imminent threat, and what to do about them.

IT security professionals have been debating the threat of deepfake audio and video material since 2017, when the first amateurish deepfakes appeared in the wild. The risk spectrum ranges from ruining people’s reputations (early deepfakes mocked or abused celebrities) to manipulating political debate (e.g. by publishing fake material immediately before elections to discredit political opponents), and on to defrauding businesses, or creating fake news to manipulate a company’s stock market valuation.

Imagine the following: you are the head of a South Africa-based energy firm, and the CEO of your German parent company calls, asking you to immediately transfer €220,000 to a supplier in Hungary. You recognise the CEO’s voice, his manner of speaking, and his German accent, and he makes a convincing case for the urgency of the money transfer. What do you do? You transfer the money, right? Only to find out later that it was fraudsters who had used AI to imitate the voice of the German CEO. This isn’t a hypothetical case, but one that actually happened in the UK in March 2019.

Now fast-forward to 2021 and all the progress that compute power and AI algorithms have made in the last two years. The current combination of rapidly advancing deepfake technology and a well-organised, well-funded cybercrime scene makes it very likely that deepfakes are approaching the inflection point that ransomware reached in 2016: progressing from a rather obscure attack vector (ransomware has been around for about 30 years) to a rapidly spreading wave of damaging attacks.

Sceptics might ask: why should cybercriminals go to the trouble of creating elaborate deepfake audio or even video material to attack businesses, when simple phishing has proven effective enough to compromise company networks with ransomware? This is a valid point, but there are four factors that nonetheless lead to the inflection point scenario: first, cybercriminals are known to adopt the latest technology quickly. Criminals have used fake e-mails to defraud businesses for decades (business e-mail compromise, BEC) – now they have a way to move from simple e-mails to powerful audio and video media, so they will definitely utilise it to expand their toolset.

Second, deepfake technology is progressing in huge steps (simply google “OpenAI Jukebox” to get a glimpse of the speed of AI innovation). This means that deepfakes become harder to distinguish from real audio and video material with every passing month. It will only get easier for attackers to succeed this way, and they will find all the necessary tools on the internet – or on the darknet, if need be. In the near future, expect a specialised darknet supply chain for deepfake-related attack services to emerge – because that is how the cybercrime industry works today.

Third, while creating a credible deepfake attack – for example, a CEO video message sent to the finance department – might involve a lot of effort and expertise, the amount of money and time cybercriminals are willing to spend on an attack directly correlates to the prospective gain.

Consider the history of ransomware: initially, it targeted individuals, and used ordinary phishing e-mails to lure them; recently, however, ransomware gangs have focused on businesses – and some have even taken to so-called “big game hunting”. This means they spend weeks or even months on spearphishing, network reconnaissance, privilege escalation, data exfiltration, and ultimately the extortion campaign.

This is a whole new league of ransomware attack – requiring far more preparation and skill, but potentially yielding a gigantic ransom if successful. Similarly, AI-based fraud takes much more time and effort than old-fashioned BEC, but if the expected profit is big enough, some threat actors will proceed this way.

And finally, like so many human endeavours, cybercrime moves in waves. It’s not as if ransomware will go out of style anytime soon, but a certain kind of threat actor will soon want to wander off the beaten track of digital extortion. The reason: due to advances in defence and backup/recovery strategies and technologies, ransomware attacks are bound to become more difficult in the coming years.

However, there is not yet any comparable defence strategy, let alone automated defence tools, against a perfectly made fake video message of the CEO ordering someone in the finance department to wire amount X to account Y. Some specialist security companies have already started working on deepfake detection technology, and companies like Facebook have banned deepfakes from their platforms to curb targeted disinformation.
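
To make the detection idea concrete, here is a minimal, purely illustrative Python sketch of the pattern such tools tend to follow: summarise an audio clip as spectral features and score it with a binary classifier. Everything specific here – the folder names real/ and fake/, the file incoming_message.wav, and the choice of log-mel features with logistic regression – is an assumption for illustration; production detectors rely on far larger labelled datasets and deep models.

```python
# A minimal, hypothetical sketch of audio deepfake detection: summarise
# each clip as log-mel spectrogram statistics and score it with a simple
# binary classifier. Folder names, file paths and the feature/model
# choices are illustrative assumptions, not a real product's pipeline.
import glob

import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def clip_features(path: str, sr: int = 16000) -> np.ndarray:
    """Represent a clip by the mean and std of its log-mel bands."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)
    return np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])

# Toy training set: labelled clips in hypothetical folders real/ and fake/.
paths, labels = [], []
for label, pattern in [(0, "real/*.wav"), (1, "fake/*.wav")]:
    for p in glob.glob(pattern):
        paths.append(p)
        labels.append(label)

X = np.stack([clip_features(p) for p in paths])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Score an incoming voice message (hypothetical file name).
suspicion = clf.predict_proba([clip_features("incoming_message.wav")])[0, 1]
print(f"Estimated probability that the clip is synthetic: {suspicion:.2f}")
```

The catch, of course, is that detection is an arms race: as generators improve, yesterday’s telltale features stop working – which is why the organisational measures below matter at least as much as tooling.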

Still, there is a window in which an attack technology is innovative enough to fool people, yet still young enough that defence mechanisms to ward it off are lacking. This window of opportunity is opening for deepfakes – not in some distant future, but now.

So what can companies do except wait for the arrival of anti-deepfake tools? There are several steps businesses can take right now to defend against deepfakes:

* Know your enemy: Make sure your security staff have this new threat on their radar and follow deepfake-related threat intelligence closely to be prepared for the looming inflection point.

* Have a plan: Establish incident response workflows and escalation procedures for this kind of attack. Remember: deepfakes utilise IT in the form of AI, but they are a business-level attack. So incident response will involve top management, IT security, finance, legal, and PR teams.

* Spread the word: Include deepfakes in security awareness campaigns to inform the workforce about this threat. Teach staff to take a step back whenever they have doubts about information they receive, even credible audio/video information. Start with targeted awareness training for high-risk individuals such as C-level management, middle management, and the finance department.

* Create channels: Establish workflows so that workers know how they can verify any information that arrives via a potential deepfake message. Just as two-factor authentication has become the de facto standard for access security, double-checking critical information must become standard procedure as we approach the age of AI-generated misinformation. There should also be a backup channel: if verifying some information isn’t possible at the moment (attackers usually apply time pressure: “this is urgent!”), there must be an alternative mode of communication. Workers should be equipped with easy-to-use multi-channel communication tools they can rely on even under time pressure or in a crisis (see the verification sketch after this list).

* Rethink corporate culture: Even with old-fashioned BEC, the catalyst for a successful attack is often that workers don’t dare question orders they were – seemingly – given by their superiors. They worry that their boss might be annoyed if they ask for confirmation. So an effective first line of defence is to re-evaluate the corporate culture, and aim for a flatter hierarchy where it’s “not a big thing” to use common sense and call one’s superior to ask: “Did you really just video-message me to transfer amount X to account Y in obscure offshore country Z?”
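
The “create channels” idea above boils down to one rule: a high-risk request is never executed on the strength of the inbound channel alone, however convincing the voice or video. The Python below is a hypothetical illustration of that gate – PaymentRequest, verify_via_second_channel and the call-back prompt are invented names standing in for whatever directory call-back, ticketing, or approval system a business actually uses.

```python
# A hypothetical sketch of an out-of-band verification gate. The names
# (PaymentRequest, verify_via_second_channel) are invented for
# illustration; the pattern is what matters: no single inbound channel,
# however convincing, can authorise a high-risk action on its own.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester: str      # who appears to be asking, e.g. "parent-company CEO"
    amount_eur: float
    beneficiary: str
    channel: str        # channel the request arrived on, e.g. "video message"

def verify_via_second_channel(request: PaymentRequest) -> bool:
    """Stand-in for a real out-of-band check: a call-back to a number from
    the company directory, a signed ticket in the workflow system, or an
    in-person confirmation. Never reuse the channel the request came in on."""
    answer = input(
        f"Call {request.requester} back on a directory-listed number. "
        f"Did they confirm EUR {request.amount_eur:,.2f} to "
        f"{request.beneficiary}? [y/N] "
    )
    return answer.strip().lower() == "y"

def process(request: PaymentRequest) -> None:
    if not verify_via_second_channel(request):
        print(f"Transfer BLOCKED: request via {request.channel} could not be "
              f"verified out-of-band; escalating to incident response.")
        return
    print("Transfer released after out-of-band confirmation.")

process(PaymentRequest("parent-company CEO", 220_000.0,
                       "supplier account, Hungary", "voice call"))
```

The design point mirrors two-factor authentication: the confirmation must travel over a path the attacker does not control, which is exactly what a deepfake of a single voice or video channel cannot reach.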

The evil deepfake genie could escape the bottle of “not quite ready for prime time” AI technology any day now. Businesses need to be aware of this risk and prepare for it, even though there is not yet any out-of-the-box “anti-deepfake solution” they could swiftly implement. And yes: I really did just give you five tips on how to start tackling this looming risk right now – and if you want to get in touch with me to verify this information, I won’t be mad.