While interference in elections, both domestic and foreign, is far from a joke, it is also not a new phenomenon. In fact, election hacking comes in many guises and has been taking place in one form or another since the 1940s.
By Grant Hamilton, country manager of Check Point
In today’s age of cyber warfare, though, it has become arguably the most powerful weapon a nation-state threat actor has to attack another nation-state.
The subtle art of manipulation
Although not a new phenomenon, fake news presents a growing threat to countries across the globe, disrupting conversations, impacting democratic processes and even threatening elections.
As seen in the notorious alleged Russian meddling campaign in the 2016 US elections, the most high-profile method for hacking an election involves tampering with computerized voting systems. Indeed, this is the method many would recognize due to heavy reporting on the topic.
However, there are more subtle, yet no less damaging, approaches for a foreign or domestic actor to bend, break or reinvent the rules to suit their own liking without needing to go anywhere near a voting machine.
Since 2016, digital propaganda and misinformation campaigns have become increasingly sophisticated. This is partly because foreign actors have taken a greater interest in exploiting the vulnerabilities of international voting systems, and partly because social media is particularly vulnerable to organised misinformation campaigns, as shown by the controversy surrounding the political consulting firm Cambridge Analytica, which became a dominant player in these efforts.
Many may believe this influence comes down to the thousands of fake accounts on Twitter or Facebook tilting the results. This is only partly true.
Take the upcoming Israeli elections, for example. According to some estimates, a political party there needs around 100 000 votes to win just one parliamentary seat. Indeed, it is difficult to persuade voters to go out and vote at all, let alone vote for a particular candidate. It is also an expensive approach, as seen by the $5-billion spent on the 2018 US mid-term elections, considered the most expensive elections ever held.
The reason for this is that in the weeks leading up to Election Day, voters are inundated with political messages, and the resulting barrage limits the impact of any one individual message, good or bad.
Instead, a more subtle, yet more efficient and powerful, approach is to influence the news narrative itself. By doing so, outside forces can have a significant impact on setting the agenda by amplifying specific stories in the news cycle, via social media, and thus sway the political outcome.
It’s all in the timing
Voters often hold preconceived ideas and cognitive biases about which way they plan to vote. For this reason, according to research into political communication, the media is not especially effective at telling people what to think. It is, however, very successful at telling readers and viewers what to think about.
Furthermore, one cognitive bias that can be exploited is recency bias: the headlines dominating the news in the weeks leading up to Election Day become the issues that voters care about most. News stories focusing on migrants, for example, can become the key issues voters are swayed to weigh when choosing between a populist and a non-populist candidate.
Because of this, it is not necessarily bots posting thousands of messages about a particular issue that are most effective for hacking an election. Instead, it is well-timed scandals, strategically placed shortly before an election, that can have the greatest impact by dominating the narrative. In this way, foreign bodies can clandestinely attempt to inject news, true or false, into popular consciousness.
Fake it until you make it
As mentioned earlier, misinformation during elections is nothing new. But in the fifth generation of the cyber threat landscape, misinformation and fake news are propagated in ever more sophisticated ways. Thanks to advances in technology, tools that began as crude fake accounts spamming unsophisticated messages en masse across social media have now reached far greater levels of maturity.
At the heart of this development is artificial intelligence (AI), giving a computer the ability to adapt in real time and create content that is increasingly believable and natural. So much so that we have arrived at a point where these tools are being recognized by their own authors as too dangerous for release. Such is the case with GPT-2, an AI language model that, trained on machine-learning algorithms and big data, can generate highly coherent paragraphs of text, unsupervised.
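Models like GPT-2 are vastly more capable, but the underlying mechanism of statistical next-word prediction can be illustrated with a toy bigram sketch. This is a minimal illustration only, not how GPT-2 itself works; the corpus and function names below are invented for the example:

```python
import random
from collections import defaultdict


def build_bigram_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model


def generate(model, seed_word, length, rng):
    """Generate text by repeatedly sampling a plausible next word."""
    output = [seed_word]
    for _ in range(length - 1):
        candidates = model.get(output[-1])
        if not candidates:
            break  # dead end: no word ever followed this one
        output.append(rng.choice(candidates))
    return " ".join(output)


# A tiny (made-up) corpus; real models train on billions of words.
corpus = ("the candidate promised voters that the candidate would "
          "protect voters and the candidate would serve the nation")
model = build_bigram_model(corpus)
print(generate(model, "the", 8, random.Random(42)))
```

Modern language models replace this word-frequency table with deep neural networks trained on enormous corpora, which is what makes their output fluent enough to pass as human-written.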
Another example is Project Debater, a computer able to generate original and persuasive content in real time, in response to counter-arguments presented to it.
These tools are not limited to text content either. Milestones have also been reached in the creation of fake video content that is almost indistinguishable from the genuine article. With video now the preferred medium for hundreds of millions worldwide, thanks primarily to smartphone usage, a threat actor using 'deepfake' videos can dictate the narrative by placing the image of a political candidate into situations where they never in fact appeared. By the time the truth is finally uncovered, the damage is already done.
When these tools are combined with the viral potential that social media platforms offer, the results are chilling to consider. And while there are some who are challenging the current level of misinformation that floods our newsfeeds on an hourly basis, one can’t help but feel they are lagging behind the advances being made on the front line of today’s cyber threats.
One reason may be that those fighting misinformation lack the data and visibility held by the social media platforms that disseminate this misleading and fake content. While the extent to which these platforms are responsible for policing the content shared on them remains hotly debated, what is clear is the need for cooperation between them and third parties, whether governments or private companies, that have an interest in limiting the damage done to users. Without such action, we are likely to see more attacks on every stage of the voting process, and the fallout from such damaging outside interference.
Conclusion
From Spain to Israel and Canada to South Africa, more than 90 regional or general elections are planned to take place in 2019, and the fear of foreign intervention looms large.
Free elections are crucial to the functioning of democratic institutions. Cyber attacks on these institutions have therefore become a weapon of choice for any state or non-state threat actor looking to sway their results or undermine democracy itself.
At the core of this option, though, whether it be attacking the computerized voting systems themselves or manipulating voters through misinformation, is technology. Accordingly, as with all forms of cyber attack, the required response to threats that leverage technology to cause mass harm is solutions that leverage alternative, yet no less advanced, technology to prevent such damage.