Artificial intelligence is no longer a niche tool. It sits inside core workflows, decision systems, and customer experiences.
For boards in South Africa and beyond, that raises a key question: who on your board can interrogate AI decisions with the same rigour used for financial, legal, and cyber issues?
The answer should be an appointed AI expert with clear accountability, writes Jeremy Bossenger, director at BossJansen Executive Search.
The AI risk landscape boards must own
AI impacts nearly every part of an enterprise. Seven key risk areas demand board oversight:
* Model risk – AI systems can hallucinate, drift, or behave inconsistently across groups. When they influence pricing, lending, hiring, or safety, failures become enterprise risks – not IT glitches. (A drift check of this kind is sketched after this list.)
* Regulatory and legal exposure – Boards must ensure AI systems comply with POPIA and King IV in South Africa, as well as global frameworks such as the EU’s AI Act. Weak documentation or oversight can increase liability.
* Bias and discrimination – Training data may encode historical inequities. Unchecked, this produces unfair outcomes that harm people and damage brand equity – posing ethical, reputational, and legal risks.
* Data security and IP leakage – Generative tools can leak confidential data through prompts, outputs, or logs. Third-party model providers and plug-ins expand attack surfaces and licensing complexity.
* Algorithmic collusion – Pricing or bidding algorithms that learn from market signals can inadvertently breach competition law. Boards must ensure proper testing and oversight.
* Operational fragility – Dependence on a single model provider or scarce AI hardware creates single points of failure. Resilience planning is essential.
* Workforce and social impact – Automation reshapes roles and incentives. Without clear plans for reskilling and transparent communication, organisations risk disengagement and backlash.
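To make the first of these risks concrete, the sketch below shows the kind of drift check a risk team might run on a deployed model's scores. The Population Stability Index (PSI) is a widely used heuristic; the 0.25 escalation threshold and the synthetic data are illustrative assumptions, not a regulatory standard.

```python
# Minimal sketch of a model-drift check using the Population Stability
# Index (PSI). Thresholds and data here are illustrative assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the training-time score distribution ('expected') with
    the live distribution ('actual'). Higher PSI means more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty bins to avoid division by zero in the log term.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# A commonly cited rule of thumb treats PSI > 0.25 as material drift;
# the exact escalation threshold is a risk-appetite decision.
training_scores = np.random.default_rng(0).beta(2, 5, 10_000)
live_scores = np.random.default_rng(1).beta(3, 4, 10_000)
if psi(training_scores, live_scores) > 0.25:
    print("Drift threshold breached - escalate for human review")
```

The point for directors is not the formula but the pattern: a numeric threshold, agreed in advance, that turns a technical signal into a governance escalation.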
Why boards need an AI expert
Modern AI risk is technical, socio-technical, and strategic.
Boards need someone who can translate between engineers, risk teams, and directors – someone able to challenge optimistic assumptions and set measurable guardrails.
The role is not to run projects, but to govern them.
What the AI expert should do
* Establish an AI governance framework – Align management with standards such as the NIST AI Risk Management Framework and ISO/IEC 42001. Maintain a model inventory with risk classification, and require approval gates for high-impact use cases (a minimal inventory sketch follows this list).
* Advocate for model lifecycle controls – Ensure data provenance, bias testing (a parity-gap check is sketched below), red-teaming for safety and misuse, performance monitoring, and documented fallback plans. Each model release should tie to explicit risk acceptance.
* Strengthen legal and ethical compliance – Map AI use to POPIA obligations and global regimes. Mandate privacy by design, consent management, explainability, and audit trails that can stand up in court.
* Secure the AI supply chain – Vet model vendors, hosting, and open-source components. Negotiate contracts that cover confidentiality, whether the vendor may train models on your data, uptime, and incident response.
* Build resilience and incident response – Run tabletop exercises for model failure or data leakage. Define thresholds for halting automated actions and escalating to human oversight (one such threshold is sketched below).
* Integrate AI with strategy and value creation – Maintain an AI portfolio with ROI targets, customer safeguards, and brand alignment. Track both upside (revenue lift, productivity) and downside (complaints, regulatory findings).
* Uplift board literacy – Run teach-ins so every director can interpret AI risk reports and challenge management. Mature boards treat AI oversight like cybersecurity and audits – not as a black box.
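A model inventory with approval gates, as described in the first point above, can be surprisingly simple. The sketch below assumes an in-house register; the field names, tiers, and required sign-offs are illustrative, not drawn from any specific framework.

```python
# Minimal sketch of a model inventory with risk tiers and an approval
# gate. Field names and required sign-offs are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"    # e.g. internal productivity tools
    HIGH = "high"  # e.g. pricing, lending, hiring, safety decisions

@dataclass
class ModelRecord:
    name: str
    owner: str                       # accountable executive
    tier: RiskTier
    approvals: list[str] = field(default_factory=list)

def may_deploy(model: ModelRecord) -> bool:
    """High-impact models need risk and legal sign-off before release;
    low-risk models need only their owner's."""
    required = {"risk", "legal"} if model.tier is RiskTier.HIGH else {"owner"}
    return required.issubset(model.approvals)

credit_model = ModelRecord("credit-scoring-v3", "CRO", RiskTier.HIGH,
                           approvals=["risk"])
print(may_deploy(credit_model))  # False - legal sign-off still missing
```

The gate is data-driven: bringing a new high-impact use case under governance means adding a record, not inventing a new process.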
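Bias testing, likewise, often starts with simple arithmetic. One common pre-release check is the demographic parity gap – the difference in approval rates between groups. The 0.10 tolerance below is an illustrative assumption; the appropriate limit is a legal and ethical judgment, not a technical one.

```python
# Minimal sketch of a pre-release bias test: the demographic parity
# gap, i.e. the difference in approval rates between two groups.
# The 0.10 tolerance is an illustrative assumption, not a legal rule.
def parity_gap(approved_a: int, total_a: int,
               approved_b: int, total_b: int) -> float:
    """Absolute difference in approval rates between groups A and B."""
    return abs(approved_a / total_a - approved_b / total_b)

gap = parity_gap(approved_a=720, total_a=1_000,   # group A: 72% approved
                 approved_b=630, total_b=1_000)   # group B: 63% approved
if gap > 0.10:
    print("Parity gap exceeds tolerance - block release for review")
else:
    print(f"Parity gap {gap:.2f} within tolerance")
```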
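Finally, a halting threshold of the kind described in the resilience point can be as plain as a board-approved complaint-rate limit. The metric and the 2% limit below are illustrative assumptions.

```python
# Minimal sketch of a board-approved halting threshold for automated
# decisions. The complaint-rate metric and 2% limit are illustrative.
def should_halt(decisions: int, complaints: int,
                max_complaint_rate: float = 0.02) -> bool:
    """Suspend automation and escalate to human oversight once the
    complaint rate breaches the agreed limit."""
    return decisions > 0 and complaints / decisions > max_complaint_rate

if should_halt(decisions=5_000, complaints=130):  # 2.6% > 2.0%
    print("Suspend automated decisions and escalate to human oversight")
```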
The South African and global imperative
South African boards already recognise technology and information governance as a board duty under King IV. The rise of generative and predictive AI heightens obligations under POPIA and intersects with transformation and fairness goals.
Globally, regulators, investors, and customers are moving from encouragement to enforcement.
Boards unable to explain how their AI works – or how it’s governed – will be exposed. Appointing a director with deep AI governance expertise is no longer optional. It’s the fastest way to close the oversight gap, protect people and data, and turn AI from a headline risk into a durable advantage.