For someone not working in cybersecurity, the most high-profile case of cloud hacking is, arguably, “celebgate” – the 2014 breach of several celebrities’ cloud storage accounts that saw hundreds of personal photos stolen and published online.
For cybersecurity professionals, though, this represents a minor incident compared to, say, the infamous 2013 breach that compromised as many as one billion user accounts, or the 2019 “cloud hopper” case, in which cyberattackers (allegedly including state actors) accessed reams of data, including intellectual property, from some of the world’s largest companies.
The cost and impact of cybercrime are climbing by around 15% every year, according to a 2020 report in Cybercrime Magazine, and are expected to reach $10.5-trillion annually by 2025. This, the report argues, makes cybercrime more profitable than the global illegal drug trade.
The real costs are far greater, though, both broadly and for individual companies: these figures represent lost investment and innovation, and companies increasingly face stringent fines for personal data losses under regulations such as the EU’s General Data Protection Regulation (GDPR) and South Africa’s Protection of Personal Information (PoPI) Act.
The human layer
All it takes is a chink in the armour, warns John Ward, SME – Cloud Business, Africa at Fortinet, and that chink can come from human error, misconfiguration, permissions-based attacks or sheer brute-force attacks. This is why he likens a solid data security strategy to a South African staple – the humble braaied onion.
“The outer edge gets the most heat through the tinfoil when you’re cooking it,” says Ward. “At the centre is the soft part – valuable data – and you have to have multiple layers of protection around that sweet, sweet data. Our data is incredibly valuable, and it can be hacked or – worse – lost. This is why you need prevention tactics, and training on human error and how it can be exploited.”
User education – training on cyber threats and the kinds of tactics used by bad actors – is a key protection layer, and one that is particularly valuable because people are inherently fallible, Ward says. The users of your systems are vulnerable to phishing, spear-phishing and social-engineering attacks. Even if your security professionals are “jaded and grumpy”, Ward jokes, the outgoing nature of your sales staff (to take just one example) – their keenness to establish relationships and to be helpful – has been exploited since time immemorial.
“And what about a systems administrator who is having a bad day, who makes a simple error in a moment of distraction?” Ward asks. These kinds of lapses are entirely relatable, but they are also what leaves the “human layer” susceptible to breaches.
This is why, Ward offers, cloud should be seen as an extension of the systems we have always used. In definitional terms, it is scalable, highly agile and accessible via internet technologies, and there are standards that apply – per-hour usage, availability guarantees, and so on. This is why cloud can be a boon to business. “But,” Ward cautions, “using cloud shouldn’t make you feel more comfortable than using your own server and network.”
Strategies, tactics, and responsibility
Another important thing to realise is that “shifting to the cloud” doesn’t absolve you of responsibility for your part in the cybersecurity conversation. Most cloud providers, in fact, work on a shared responsibility model, with customers retaining responsibility – for example – for defining who has access to their cloud environments, delineating roles and, ultimately, ensuring that their precious data is secure.
“Password standards are important, of course, but a password is part of the human layer or interface. So, we must also verify that users are who they appear to be – that they aren’t a stolen identity or the result of shoulder surfing. Role-based access would definitely be a preventative measure, as is two-factor or multi-factor authentication (2FA and MFA).” Still, it is important to ensure these avenues of control do not become prohibitively convoluted, he continues. They must be relatively intuitive so that users don’t push back or see them as a barrier to getting their work done.
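To make that concrete, here is a minimal sketch, in Python, of how role-based permissions and a time-based second factor might be combined before access is granted. The roles, users and secrets are purely illustrative and do not reflect any particular vendor’s implementation.

```python
# Illustrative sketch of role-based access control combined with a second factor.
# All role names, users and secrets here are hypothetical.
import hmac, hashlib, struct, time

ROLE_PERMISSIONS = {
    "finance-analyst": {"read:ledger"},
    "hr-admin": {"read:employee-records", "write:employee-records"},
}
USER_ROLES = {"thandi": "finance-analyst", "sipho": "hr-admin"}

def has_permission(user: str, permission: str) -> bool:
    """Role-based check: a user only gets the permissions of their assigned role."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Standard time-based one-time password (RFC 6238) used as a second factor."""
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

def authorise(user: str, permission: str, supplied_code: str, secret: bytes) -> bool:
    """Grant access only if the role allows the action AND the second factor matches."""
    return has_permission(user, permission) and hmac.compare_digest(supplied_code, totp(secret))
```

The point of the sketch is the layering: a stolen password alone fails the second factor, and a stolen device alone fails the role check.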
Then, whether it is inside your HQ or out in the cloud, Ward explains, data is always in one of three states: in use, at rest, or in motion. With each of these, Ward says, there are vulnerabilities that need to be addressed – and here again, he advocates a layered approach to security.
“In order to both enable your workforce and protect the organisation,” Ward says, “you need to ensure that the right data is given to the right people at the right time – that’s data in use.” This is where a security framework like “zero trust” can be appropriate, as it requires all users, devices, applications and services – internal and external – to be authenticated, authorised and constantly verified.
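As a rough illustration of the principle, the Python sketch below grants nothing by default: every request must pass a device-posture check and an explicit authorisation check before any data is returned. The users, devices, policy table and resources are hypothetical stand-ins, not a real zero-trust product.

```python
# Minimal zero-trust-style gate: every request is checked, whether it originates
# inside or outside the network, and the check repeats on every call.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    role: str          # in practice, taken from a verified identity token
    device_id: str
    resource: str
    action: str

ALLOWED = {("analyst", "reports", "read"), ("admin", "reports", "write")}
COMPLIANT_DEVICES = {"laptop-001", "laptop-007"}   # e.g. patched, disk-encrypted
RESOURCES = {"reports": "quarterly figures"}

def handle(req: Request) -> str:
    # 1. Verify the device as well as the user (posture check).
    if req.device_id not in COMPLIANT_DEVICES:
        raise PermissionError("non-compliant device")
    # 2. Authorise the specific action on the specific resource (least privilege).
    if (req.role, req.resource, req.action) not in ALLOWED:
        raise PermissionError("not authorised")
    # 3. Only now is the data released.
    return RESOURCES[req.resource]

print(handle(Request("thandi", "analyst", "laptop-001", "reports", "read")))
```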
Other tactics in this layered approach, used in combination or on their own, include perimeter security, intrusion detection and prevention systems (IDS/IPS) that can identify anomalies, and sandboxing, which makes it possible to identify threats being seen for the very first time. The key in all of this is real-time threat intelligence continually fed into your security infrastructure, so that it is always up to date with the latest malware, attack techniques and attacker profiles. An ever-evolving threat landscape can only be addressed with an ever-evolving cyber defence.
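A toy example of what feeding threat intelligence into that detection layer can look like: known-bad file hashes are blocked outright, while unfamiliar but suspicious files are routed to a sandbox. The feed, heuristic and actions here are simplified stand-ins rather than any specific product’s behaviour.

```python
# Toy illustration of threat intelligence driving a detection layer.
import hashlib

threat_feed: set[str] = set()   # indicator-of-compromise hashes, refreshed in real time

def refresh_feed(new_indicators: list[str]) -> None:
    """Stand-in for pulling indicators from a commercial or open threat-intel service."""
    threat_feed.update(new_indicators)

def looks_suspicious(payload: bytes) -> bool:
    """Crude stand-in for heuristics that route unknown files to a sandbox."""
    return payload.startswith(b"MZ")   # e.g. Windows executables get detonated first

def inspect(payload: bytes) -> str:
    digest = hashlib.sha256(payload).hexdigest()
    if digest in threat_feed:
        return "block"                 # known-bad: IPS-style prevention
    return "sandbox" if looks_suspicious(payload) else "allow"
```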
Standards in place, and beyond standards
Data at rest includes “backing up in the cloud” and cloud storage. Ward warns here that, firstly, a backup needs to be a true backup – best practice means keeping data in multiple places, not just “cutting and pasting” it off a machine – and, secondly, that cloud storage is only as safe as the policies, protocols and standards applied to it. Correctly configuring access is key. Equally critical is encrypting data at rest.
Too often, he cautions, clients are in a rush to get a system up and running and skip steps like consulting the group risk department, which has standards for things such as encryption. Cloud environments need to be proactively and continuously monitored to identify any data that is not encrypted or that may mistakenly be publicly readable.
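One way to automate that kind of monitoring is sketched below, assuming AWS S3 and the boto3 SDK purely as one example of a cloud storage service: it flags buckets with no default encryption configured and buckets whose public access is not fully blocked.

```python
# Continuously auditable check for unencrypted or potentially public cloud storage.
# Assumes AWS S3 and boto3 as an example; the same idea applies to other providers.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def audit_buckets() -> list[str]:
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        # Flag buckets with no default encryption configured (data at rest).
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                findings.append(f"{name}: no default encryption")
        # Flag buckets without a public-access block, i.e. potentially world-readable.
        try:
            block = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(block.values()):
                findings.append(f"{name}: public access not fully blocked")
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                findings.append(f"{name}: no public access block configured")
    return findings

if __name__ == "__main__":
    for finding in audit_buckets():
        print(finding)
```

Run on a schedule, a report like this turns “correctly configuring access” from a one-off setup task into the continuous monitoring Ward describes.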
Finally, there’s data in motion. “The encryption you use will be determined by what data is going to be transmitted. Data is very vulnerable when it is in motion,” Ward says. “There are hundreds of millions of unencrypted emails flying about every day.”
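Email is a useful example of the data-in-motion problem. The sketch below, using only the Python standard library (the SMTP host is a placeholder), refuses to hand a message to the mail server until the connection has been upgraded to TLS with certificate verification and a modern protocol version.

```python
# Small illustration of protecting data in motion: never send mail over an
# unencrypted channel. The SMTP host shown is a placeholder, not a real service.
import smtplib, ssl
from email.message import EmailMessage

def send_encrypted(msg: EmailMessage, host: str = "smtp.example.com", port: int = 587) -> None:
    # Require certificate verification and a modern TLS version.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls(context=context)   # upgrade the connection before any content is sent
        smtp.send_message(msg)           # the message only ever travels over TLS
```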
Stratified strategy
The bottom line, Ward says, is that no security or encryption tool is completely impenetrable, especially as the technology deployed against them becomes increasingly sophisticated – as we are seeing with the use of quantum computing in this context.
“The final layer of the onion, so to speak, is monitoring and the capability of detecting when things aren’t going as expected, and then – vitally – the means to act on that information automatically.” This is why artificial intelligence (AI) in security is coming to the fore, Ward says. “AI is being used to distinguish the behaviour of a bad actor from that of a good actor while the data is in use. Our AI systems are diagnosing potential threats and proactively improving security. AI can quickly assess what a user’s previous access encompassed, and determine that the user is suddenly asking for information they have never had before.” AI, along with automation, can then proactively work to limit access and tighten the controls in place.
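Stripped of the machine-learning detail, the core of that behavioural check can be expressed very simply: compare each request against what the user has accessed before, and step up the controls when it falls outside that history. The history store and responses below are illustrative only.

```python
# Simplified version of the behaviour check described above: flag requests that
# fall outside a user's historical access pattern and tighten controls in response.
from collections import defaultdict

access_history: dict[str, set[str]] = defaultdict(set)

def record_access(user: str, dataset: str) -> None:
    access_history[user].add(dataset)

def evaluate_request(user: str, dataset: str) -> str:
    """Return an action for the request based on past behaviour."""
    if dataset in access_history[user]:
        return "allow"   # consistent with previous behaviour
    # Never-before-seen request: don't block outright, but step up the controls.
    return "require re-authentication and notify security team"

record_access("thandi", "quarterly-reports")
print(evaluate_request("thandi", "quarterly-reports"))   # allow
print(evaluate_request("thandi", "payroll-database"))    # stepped-up controls
```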
As we increasingly rely on cloud environments and our world becomes ever more digital, all these defences are coming together to ensure that we are the ones left to enjoy the rewards of that sweet data core, and not the bad actors.