Last weekend, at a typical South African braai (barbecue), I found myself in a heated conversation with someone highly educated – yet passionately defending a piece of Russian propaganda that had already been widely debunked, writes Anna Collard, senior vice-president of content strategy and evangelist at KnowBe4 Africa.

It was unsettling. The conversation quickly became irrational, emotional, and very uncomfortable. That moment crystallised something for me: we’re no longer just approaching an era where truth is under threat – we’re already living in it.

A reality where falsity feels familiar, and information is weaponised to polarise societies and manipulate our belief systems. And now, with the democratisation of AI tools like deepfakes, anyone with enough intent can impersonate authority, generate convincing narratives, and erode trust – at scale.


The Evolution of Disinformation

The 2024 KnowBe4 Political Disinformation in Africa Survey revealed a striking contradiction: while 84% of respondents use social media as their main news source, 80% admit that most fake news originates there. Despite this, 58% have never received any training on identifying misinformation.

This confidence gap echoes findings in the Africa Cybersecurity & Awareness 2025 Report, where 83% of respondents said they’d recognise a security threat if they saw one—yet 37% had fallen for fake news or disinformation, and 35% had lost money due to a scam.

What’s going wrong? It’s not a lack of intelligence – it’s psychology.


The Psychology of Believing the Untrue

Humans are not rational processors of information; we’re emotional, biased, and wired to believe things that feel easy and familiar. Disinformation campaigns – whether political or criminal – exploit this.

  1. The Illusory Truth Effect: The easier something is to process, the more likely we are to believe it – even if it’s false (Unkelbach et al., 2019). Fake content often uses bold headlines, simple language, and dramatic visuals that “feel” true.
  2. The Mere Exposure Effect: The more often we see something, the more we tend to like or accept it – regardless of its accuracy (Zajonc, 1968). Repetition breeds believability.
  3. Confirmation Bias: We’re more likely to believe and even share false information when it aligns with our values or beliefs.

A recent example is the viral deepfake image from Hurricane Helene that spread across social media. Despite fact-checkers clearly identifying it as fake, the post continued to circulate. Why? Because it resonated with users’ frustration and emotional state of mind.


Deepfakes and State-Sponsored Deception

According to the Africa Center for Strategic Studies, disinformation campaigns on the continent have nearly quadrupled since 2022.

Even more troubling: nearly 60% are state-sponsored, often aiming to destabilise democracies and economies.

The rise of AI-assisted manipulation adds fuel to this fire. Deepfakes now allow anyone to fabricate video or audio that’s nearly indistinguishable from the real thing.


Why This Matters for Business

This isn’t just about national security or political manipulation – it’s about corporate survival too. Today’s attackers don’t need to breach your firewall; they can trick your people. This has already led to real corporate losses, like the Hong Kong finance employee tricked into transferring over $25-million during a fake video call with deepfaked “executives”.

These corporate disinformation or narrative-based attacks take many forms:

  • Fake press releases can tank your stock.
  • Deepfaked CEOs can authorise wire transfers.
  • Viral falsehoods can ruin reputations before your PR team even logs in.

The WEF’s Global Risks Report 2024 named misinformation and disinformation as the top global risk over the next two years, ranking them above even climate-related and geopolitical threats. That’s a red flag businesses cannot ignore.

The convergence of state-sponsored disinformation, AI-enabled fraud, and employee overconfidence creates a perfect storm. Combating this new frontier of cyber risk requires more than just better firewalls. It demands informed minds, digital humility, and resilient cultures.


Building Cognitive Resilience

What can be done? While AI-empowered defences can help improve detection capabilities, technology alone won’t save us. Organisations must also build cognitive immunity – employees’ ability to discern, verify, and challenge what they see and hear.

  • Adopt a Zero Trust Mindset – Everywhere: Just as systems don’t trust a device or user by default, people should treat information the same way, with a healthy dose of scepticism. Encourage employees to verify headlines, validate sources, and challenge urgency or emotional manipulation – even when it looks or sounds familiar.
  • Introduce Digital Mindfulness Training: Train employees to pause, reflect, and evaluate before they click, share, or respond. This awareness helps build cognitive resilience – especially against emotionally manipulative or repetitive content designed to bypass critical thinking. Educate on deepfakes, synthetic media, AI impersonation, and narrative manipulation. Build understanding of how human psychology is exploited – not just technology.
  • Treat Disinformation Like a Threat Vector: Monitor for fake press releases, viral social media posts, or impersonation attempts targeting your brand, leaders, or employees, and include reputational risk in your incident response plans. A minimal monitoring sketch follows this list.
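To make that last point concrete, here is a minimal, illustrative Python sketch of one narrow slice of such monitoring: generating common typosquat variants of a brand’s domain and flagging those that actually resolve in DNS, since look-alike domains are a frequent launchpad for fake press releases and executive impersonation. The domain example-brand.com is a placeholder, and real-world monitoring would layer on WHOIS data, certificate-transparency feeds, and social listening – this is a starting point under those assumptions, not a complete control.

```python
# Illustrative sketch: flag look-alike (typosquatted) domains that could be
# used to impersonate a brand. "example-brand.com" is a placeholder domain;
# swap in your own. Uses only the Python standard library.
import socket
import string


def typosquat_variants(domain: str) -> set[str]:
    """Generate common look-alike variants of a domain name."""
    name, _, tld = domain.partition(".")
    variants: set[str] = set()
    # Character omissions, e.g. "exmple-brand.com"
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)
    # Adjacent-character swaps, e.g. "examlpe-brand.com"
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        variants.add(swapped + "." + tld)
    # Single-character substitutions, e.g. "examp1e-brand.com"
    for i in range(len(name)):
        for ch in string.ascii_lowercase + "0123456789-":
            variants.add(name[:i] + ch + name[i + 1:] + "." + tld)
    variants.discard(domain)  # drop the legitimate domain itself
    return variants


def resolving_domains(candidates: set[str]) -> list[str]:
    """Return the candidate domains that actually resolve in DNS."""
    live = []
    for candidate in sorted(candidates):
        try:
            # Raises socket.gaierror for unregistered or unresolvable names.
            socket.gethostbyname(candidate)
            live.append(candidate)
        except socket.gaierror:
            continue
    return live


if __name__ == "__main__":
    suspects = resolving_domains(typosquat_variants("example-brand.com"))
    for domain in suspects:
        print(f"Review possible impersonation domain: {domain}")
```

Even a simple job like this, run on a schedule, can give security and communications teams early warning when someone registers a domain designed to pass as theirs.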

The battle against disinformation isn’t just a technical one – it’s psychological. In a world where anything can be faked, the ability to pause, think clearly, and question intelligently is a vital layer of security. Truth has become a moving target. In this new era, clarity is a skill that we need to hone.