King V’s recognition of artificial intelligence (AI) as part of the broader information governance landscape signals a maturing view of corporate accountability in the digital era. It acknowledges that technology is no longer peripheral to governance; it is governance.
For South African organisations, this marks a crucial shift from asking if AI should be governed to understanding how to govern it well.
The newly drafted King V Code on Corporate Governance for South Africa introduces a clear and deliberate stance on how organisations should approach AI.
As AI becomes embedded in business operations and decision-making, King V positions it squarely within the realm of governance, ethics, and accountability — not just technology.
“Artificial intelligence offers extraordinary potential for business efficiency and innovation, but without clear governance it also introduces significant risk,” says Grant Christianson, group legal advisor and company secretary at e4.
“King V’s inclusion of AI within its information governance principle is an important evolution. It requires boards to not only understand the opportunities that AI presents, but to actively manage how it is used, ensuring that it aligns with the organisation’s values and legal obligations.”
AI in King V: What’s new
King V places information governance – encompassing IT, data, and emerging technologies such as AI – under Principle 9 of the Code.
This principle makes it explicit that boards must oversee how technology is used, ensuring that its deployment is ethical, transparent, human-supervised, and aligned with organisational purpose.
In particular, the Code expects that organisations will: ensure appropriate human oversight of AI systems; align AI use with ethical principles such as fairness, transparency, and trust; and integrate AI-related risks – bias, privacy, cybersecurity, and reputational harm – into enterprise risk management frameworks.
“King V effectively broadens the board’s accountability,” adds Christianson. “It’s no longer sufficient for directors to defer AI oversight to IT or compliance teams. Boards now have a duty to understand the technology, its implications, and its impact on stakeholders.”
Why this matters in the SA context
South Africa is seeing rapid AI adoption across sectors, from financial services and retail to healthcare and mining. However, our regulatory and social environment adds unique complexity.
With data privacy governed by the Protection of Personal Information Act (POPIA), and a national backdrop of trust deficits and inequality, the ethical governance of AI cannot be treated as optional.
For listed entities, financial institutions, and companies handling sensitive data, aligning with King V helps strengthen stakeholder confidence, investor assurance, and regulatory preparedness.
“Governance is not about stifling innovation,” says Christianson. “It’s about enabling it responsibly. AI can drive competitive advantage, but only when organisations have the proper guardrails to protect people, data, and reputation.”
What it means for businesses
Boards are now explicitly responsible for governing AI and emerging technologies as part of strategy and value creation.
Explainability and human oversight are not optional features, but required characteristics of how the technology is deployed.
AI and data risks must be treated as enterprise risks, with appropriate monitoring and assurance.
The “apply and explain” principle requires organisations to articulate how governance is applied in practice.
Balancing ethics and innovation
Some have noted that King V’s language on AI may feel cautious — but Christianson believes this is an opportunity rather than a constraint.
“The proportionality principle allows organisations to apply AI governance in a way that fits their size and complexity. The key is not to slow down innovation, but to innovate safely. Responsible governance is the bridge between risk and reward.”
Much like cybersecurity, AI governance must be embedded early in development cycles, not retrofitted after deployment. This mindset ensures that ethics and compliance enable innovation, rather than restrict it.
Building the governance muscle
Implementing King V’s expectations will require boards and executives to build stronger technology and data governance competencies.
Training, independent assurance, and cross-functional collaboration between legal, risk, IT, and business functions will become essential.
As Christianson notes: “Effective AI governance doesn’t happen by accident — it’s deliberate, structured, and continuous. Organisations that invest in understanding their data and technology risks now will be far better positioned to lead responsibly in the years ahead.”
Those that embed ethical and transparent AI practices today will not only meet compliance standards, but will set the benchmark for trustworthy innovation in the Fourth Industrial Revolution.