South Africa leads the rest of Africa when it comes to cybercrime. In 2022, 230-million threats were detected in the country, well exceeding Morocco in second place with 71-million threats.

By Steven Kenny, architect and engineering program manager: EMEA at Axis Communications

South Africa also recorded the highest number of targeted ransomware and business email compromise attempts, and is home to the third-highest number of cybercrime victims worldwide, at an annual cost of R2,2-billion.

Cybercrime is big business, and threat actors are deploying cutting-edge tools to carry out their attacks. Fortunately, cybersecurity is constantly evolving to meet the ever-changing needs of individuals and organisations, and to counter the threats they face.

Enterprises now have access to, and are starting to leverage, cutting-edge solutions that reinforce their security resilience. Artificial intelligence (AI), which is influencing every sphere of business activity, can help secure enterprises’ growing attack surface, and identify and mitigate vulnerabilities without the need for additional human intervention.

As with any business change, part of deploying AI-driven solutions is having a robust strategy in place, one that considers the long-term feasibility and requirements of those solutions.

Threats of escalating severity

For many threat actors, cybercrime is a business like any other. As a result, they are inclined to adopt the latest trends and use the latest technologies to carry out their attacks. The various features of AI and machine learning (ML) that enterprises are starting to explore are the same features criminals are misusing.

There are several examples of this. For instance, generative AI tools such as ChatGPT and Google’s recently launched Bard can write convincing, marketing-style messages for phishing emails. AI automation tools can be used to create automated interactions with a large pool of potential victims. Algorithms trained on personal data can be used to build victim profiles and prioritised target lists, minimising the resources attackers need while increasing the accuracy of their attacks.

However, the misuse of AI goes beyond straightforward phishing attempts using ChatGPT. AI-powered malware can leverage advanced techniques to evade detection by security software, and use metamorphic mechanisms to change how it operates depending on the environment it is in.

Consider DeepLocker, an AI-powered malware developed by IBM Research as an experiment. It conceals its intent until it reaches a specific victim, potentially infecting millions of systems without being detected. It is critical that enterprises stay one step ahead of malicious innovation like this, and they can do so by properly integrating AI-powered systems and countermeasures into their security strategies.

First responders

Adopting AI-enabled security systems requires an overhaul of how organisations run security internally. In other words, given the technological, legal, and ethical implications of those systems, companies need to provide adequate training and education for their security teams, as well as conduct due diligence with their respective IT suppliers and partners.

From there, the key factor is data. AI programmes can identify patterns, detect anomalies, and analyse vast amounts of data throughout an organisation’s network and infrastructure. This applies to infrastructure regardless of its scope and circumstances. Case in point: AI can be used to detect vulnerabilities in hybrid or remote environments where systems are decentralised.[1] These programmes serve as the “first responders” in countering any malicious activity, and they help organisations assume a more proactive, forward-looking risk posture.
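To make the anomaly-detection idea concrete, the sketch below trains an isolation forest on simulated “normal” network sessions and flags sessions that deviate from that baseline. It is a minimal illustration only: the session features (bytes sent and received, duration, failed logins) and all figures are invented for the example, not drawn from any particular product.

```python
# A minimal sketch of network anomaly detection with an isolation forest.
# Feature names and numbers are illustrative, not a production design.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: bytes sent, bytes received,
# session duration (seconds), and number of failed logins.
rng = np.random.default_rng(42)
normal_traffic = rng.normal(loc=[5_000, 20_000, 60, 0],
                            scale=[1_500, 6_000, 20, 0.5],
                            size=(1_000, 4))

# Learn what "normal" looks like for this (simulated) network.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# Score new sessions; a prediction of -1 marks a session as anomalous.
new_sessions = np.array([
    [5_200, 21_000, 55, 0],    # looks like routine traffic
    [900_000, 1_000, 3, 12],   # large upload plus many failed logins
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALY - escalate for review" if label == -1 else "normal"
    print(session, status)
```

In practice, a model like this would be retrained as the network’s baseline shifts, and flagged sessions would typically feed an alerting and triage workflow rather than trigger automatic blocking.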

AI is also a force for reducing organisations’ security workloads. For example, AI-powered automated patching can track and patch important software in real time and minimise potential exposure to threat actors.[2] That said, businesses should not become over-reliant on these systems, nor leave them susceptible to data breaches. To avoid this, organisations must implement solid policies and guidelines regarding data access, monitoring, and analytics.
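As an illustration of what automated patch tracking can look like at a small scale, the hedged sketch below uses pip to list outdated Python packages, upgrades only those on an approved allow-list, and flags the rest for human review. The allow-list and the policy it encodes are hypothetical; a real deployment would be driven by vulnerability feeds, testing, and change control.

```python
# A minimal sketch of tracking outdated Python packages and patching only
# an approved subset automatically. The allow-list below is hypothetical.
import json
import subprocess
import sys

# Hypothetical policy: only these packages may be upgraded without review.
AUTO_PATCH_ALLOW_LIST = {"requests", "urllib3", "cryptography"}

def outdated_packages():
    """Return pip's view of installed packages with newer versions available."""
    result = subprocess.run(
        [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def patch(package_name):
    """Upgrade a single package to its latest release."""
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "--upgrade", package_name],
        check=True,
    )

if __name__ == "__main__":
    for pkg in outdated_packages():
        name, current, latest = pkg["name"], pkg["version"], pkg["latest_version"]
        if name.lower() in AUTO_PATCH_ALLOW_LIST:
            print(f"Patching {name}: {current} -> {latest}")
            patch(name)
        else:
            print(f"Flagging {name} ({current} -> {latest}) for review")
```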

We need to embrace the future

According to Microsoft-IDC research, 39% of companies in South Africa plan to address security concerns by improving the automation of processes and integration of their technologies.[3] This is a step in the right direction, but for many organisations it is only the beginning of the effort to overhaul their security setups.

AI represents a turning point in how we approach security, among many other business functions. Its implementation may come with unanticipated consequences, but organisations need to be prepared to adopt it, lest they fall behind their competitors or only realise its value too far down the road.