We are on the precipice of an unprecedented shift that is reshaping the cybersecurity industry and how organisations navigate the digital landscape, write Morné Louwrens, MD: advisory at Marsh Africa, and Prejlin Naidoo, digital partner at Oliver Wyman.

In 2024, companies saw greater adoption of artificial intelligence (AI) systems, with over three in five South African workers using generative AI tools frequently, according to a report by the Oliver Wyman Forum.

However, workers may be using generative AI tools like ChatGPT to generate presentation decks from uploaded internal reports and, in the process, unwittingly exposing sensitive company data. As these rapidly developing AI tools are further integrated into business processes in 2025, greater risks will emerge.

Cybercriminals have caught on to AI’s potential to make their schemes more efficient and harder to thwart.

Added to this, the World Economic Forum’s 2025 Global Risks Report found that generative AI poses a “critical” misinformation and disinformation risk because it can be used to manipulate public opinion.

Security professionals can most effectively combat AI-assisted cybercrime by leveraging AI for threat detection and response. However, this will likely intensify the arms race between attackers and defenders. AI also offers a human resource advantage, as it can reduce skills gaps in a market where digital security experts are in short supply.

Human error is behind almost 70% of successful cyberattacks, making people the most vulnerable element in cybersecurity. As we move into 2025, organisations must prioritise strong AI governance policies and acceptable use guidelines to effectively manage this ever-present threat.


Cybercrime will be on the rise in 2025

Digital theft, fueled by AI advancements, is a burgeoning global business. Cybersecurity Ventures estimates that cybercrime netted hackers around the world $9,5-trillion in 2024, and this figure could grow by $1-trillion in 2025. As a “country,” cybercrime would rank as the world’s third-largest economy, trailing only the US and China.

Cybercrime is a significant issue in Africa as well.

In South Africa alone, R2,2-billion is lost annually, and Interpol estimates losses exceeding $4-billion across the continent, highlighting the urgent need for stronger security measures, as echoed in a Marsh report on cyber risk trends.

Ransomware is gaining momentum on the continent because hackers view Africa as a testing ground for novel tactics due to the perception that the continent is ill-prepared. Interpol reporting shows that past attacks in Africa have focused on critical infrastructure such as healthcare systems and government services, threatening public safety and economic stability.

Geopolitical uncertainty and distrust, fueled by misinformation and AI-amplified disinformation campaigns, will escalate targeted cyberattacks in 2025, a dynamic the 2025 Global Risks Report also highlights. Education and awareness are crucial for African nations, as they are not immune to these threats that undermine national security.


AI-powered ransomware, phishing and harmful code

Malicious actors will use AI to automate different stages of ransomware attacks to execute large-scale campaigns that target multiple organisations at once. Machine learning algorithms can enable ransomware to evolve in real-time, by modifying its behaviour based on information gathered from the system it infiltrates. Traditional security measures will battle to detect and neutralise these threats.

AI-powered phishing attacks are becoming more sophisticated and personalised, leveraging social media and communication patterns to create convincing messages. This makes it increasingly difficult to detect malicious emails, even for well-trained individuals, as AI also reduces telltale errors like spelling mistakes and grammatical inconsistencies.
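The point above can be made concrete with a toy sketch: many legacy filters and user-awareness heuristics leaned on the telltale errors that classic phishing emails contained, and a fluent AI-written lure simply contains none of them. The word list, messages and function below are illustrative assumptions, not any real product's rules.

```python
# Toy illustration of an error-based phishing heuristic (hypothetical,
# not a production filter): it flags a message only when it contains
# one of the misspellings that old-style lures were known for.
TELLTALE_MISSPELLINGS = {"acount", "verifcation", "pasword", "securty"}

def flags_as_phishing(message: str) -> bool:
    """Return True only if the message contains a known telltale misspelling."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & TELLTALE_MISSPELLINGS)

clumsy = "Please verify your acount pasword now"
polished = "Hi Sam, as discussed, please confirm the wire details before 3pm."

print(flags_as_phishing(clumsy))    # True: the old-style lure is caught
print(flags_as_phishing(polished))  # False: a fluent, personalised lure slips through
```

The second message is exactly the kind of grammatically clean, context-aware text generative AI produces, which is why detection increasingly has to rely on signals other than writing quality.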

Since AI is making coding more accessible, hackers will also find it easier to create malicious code that can be added to a link in a phishing email. Malware like DeepLocker can use AI to hide its true nature and evade detection by remaining dormant until activated, allowing it to bypass traditional security measures. Such AI-powered code can inflict extensive damage on corporate systems, combining stealth with precision.


Our eyes cannot be trusted 

In 2024, a finance worker at a major Hong Kong firm was scammed into sending $25-million to fraudsters. This happened after he was tricked into attending a video call with what he thought were other team members who authorised the transaction. In reality, he unknowingly attended a meeting with AI-manipulated recreations of his colleagues. This attack was executed using deepfakes, or synthetic media created with AI and deep learning algorithms.

Deepfake tools can easily manipulate media and people. The US Department of Homeland Security warns that deepfakes are a pervasive threat due to people’s inclination to believe what they see, and that synthetic media will increasingly be used for fraudulent purposes.

The misuse of deepfake technology has prompted demands for enhanced tools to identify AI-manipulated content. Sectors like financial services must proactively strengthen cybersecurity frameworks, prioritise advanced detection mechanisms, and cultivate user awareness to counter the increasingly complex cybersecurity threats anticipated in 2025.


Fighting AI threats with AI-enhanced defences

Organisations are using AI to improve their defences. AI excels at processing and analysing vast amounts of data, making tasks like pattern recognition and behavioural analysis more efficient. This makes AI crucial in countering malicious AI use.
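As a minimal sketch of what behavioural analysis means in practice, the snippet below flags a login that deviates sharply from a user's historical pattern. Real systems use far richer features and models than a single z-score on login hours; the function name, data and threshold here are invented for illustration.

```python
# Minimal behavioural-analysis sketch (assumed feature: login hour of day).
from statistics import mean, stdev

def is_anomalous(history, new_value, threshold=3.0):
    """Flag a value more than `threshold` standard deviations
    from the user's historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# A user who normally logs in around 9am (24h clock)
usual_logins = [9, 9, 10, 8, 9, 9, 10, 8, 9, 9]
print(is_anomalous(usual_logins, 9))   # False: routine behaviour
print(is_anomalous(usual_logins, 3))   # True: a 3am login is flagged for review
```

The value of automating even this crude check is scale: a model can score every login, transaction or file access in real time, which no human security team can do manually.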

Financial services firms, including banks, are already using AI for security measures like facial recognition for user authentication. Other firms use AI to help coders fix vulnerabilities and to enhance risk assessment by analysing data from numerous investigations.

Financial services companies are also proactive in educational messaging that alerts customers to the latest fraudulent trends. This is crucial, as people are the most vulnerable link in the cybersecurity value chain.


Preventing people from being the weakest link  

In 2025, navigating the evolving cybersecurity landscape will demand more than just advanced tools. True resilience is achieved through strong collaboration between humans and AI, where technology strengthens defences and human insight guides safe practices.

This holistic approach allows businesses to not only mitigate risks but also build a resilient foundation that will enable them to thrive amidst future challenges.