What colour hat is the hacker who just penetrated your organisation without permission wearing? Grey? Black? White?

Hacking is often seen as a black-and-white issue: either you are a malicious hacker who breaks into systems for personal gain, or you are an ethical hacker who tests security for the greater good. What if you are a cybersecurity expert who hacks into a client’s network without their consent? Is that still ethical hacking, or is it crossing the line?

According to South African law, ethical hacking requires authorisation from the target. “There is no in-between; you are either an ethical hacker or not,” says Stephen Osler, co-founder and business development director at Nclose.

He explains: “It doesn’t work to have a white hat penetrate a company without telling it that an attack is about to be launched. This moves the conversation into black- and grey-hat territory – where hackers find and report vulnerabilities in a network without permission. Usually, this type of hack ends with the hackers demanding money to fix or reveal the problem.”

White-hat hacking aims to find and fix the flaws in a customer’s system so that the cybersecurity experts and the organisation together can catch and patch vulnerabilities. Skilled hackers use techniques such as phishing, social engineering, security scanning and penetration testing to identify the weakest links in an organisation’s security chain. It’s a smart way to ensure that a company’s systems are robust and secure, and to prevent costly mistakes at the hands of black-hat hackers.
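To give a flavour of what the “security scanning” mentioned above can involve at its simplest, here is a minimal, illustrative Python sketch of a TCP port check – the kind of basic probe an authorised tester might run against a client’s host. The host and port values are placeholders, not anything from the article, and a real engagement would use dedicated tooling and, crucially, written permission from the target.

```python
# Illustrative sketch only: a minimal TCP port check of the kind an
# authorised white-hat tester might run during a security scan.
# Hostnames and ports below are assumptions for the example.
import socket


def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success rather than raising an exception
        return sock.connect_ex((host, port)) == 0


def scan(host: str, ports: list[int]) -> list[int]:
    """Return the subset of ports that accept TCP connections."""
    return [p for p in ports if port_is_open(host, p)]
```

A tester would run something like `scan("10.0.0.5", [22, 80, 443])` – against an in-scope, authorised host only – and compare the open ports found with what the client believes should be exposed.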

“This is a completely different approach that ensures every part of a customer’s platform and business is secured,” says Osler. “When you suddenly go rogue and have a bunch of hackers trying to clamber into a company system without permission, that’s attacking that company and stepping straight into cybercriminal territory.

“There’s an approach where you have a red team of attackers and a blue team of defenders and the red team tries to breach a company’s defences,” says Osler. “Some cybersecurity experts think that telling the blue team about the attack defeats the purpose. They say that the real value is in testing how quickly they detect a cyber-incident. You don’t test efficiency if you warn people before testing them. But we believe that the right approach is to combine the two into an approach known as purple teaming.”

This combines the skills of both teams to help them learn from each other and build strong security skill sets that benefit both the organisation and the cybersecurity service provider. The blue team defends the network and challenges the red team to try harder to break in, while the red team looks for new ways to overcome the blue team’s defences. With this collaborative approach, everyone benefits and there’s no unauthorised hacking.

Osler concludes: “This is a far more effective way of maintaining skills, checking defences and building a company’s security than hacking without permission. That not only damages trust – the company feels violated rather than supported – but also raises questions about ethics, access to private company information, regulations and law that are too important to ignore. It’s better to use a collaborative approach that benefits everyone and that keeps the hacking hats as white as possible.”