In 2016, Klaus Schwab, executive chairman of the World Economic Forum, reflected on the intersection of technology and humanity, writes Ntsako Baloyi, senior manager in the technology business at Accenture, Africa.

Schwab expressed concern about how technology like smartphones could affect our compassion and connection with others, possibly depriving us of meaningful engagement: “Similarly, the revolutions occurring in biotechnology and AI, which are redefining what it means to be human by pushing back the current thresholds of life span, health, cognition, and capabilities, will compel us to redefine our moral and ethical boundaries.”

Schwab’s insights from nearly a decade ago highlight the importance of exploring and centring ethics in responsible AI adoption. While organisations are getting swept up in the AI revolution, the pressure to rapidly adapt can mean ethics become an afterthought. However, concerns like data privacy, bias, and human cost can put employees and consumers at risk and should be at the core of any digital transformation strategy.

Ethical complexities may slow AI adoption, but organisations must transform or risk obsolescence. Our research reveals that while 83% of organisations have accelerated transformation efforts in response to disruption, most fall short on speed, strategy, and sustainability. Only 9% are what our research calls “reinventors,” leading the charge with interconnected approaches, while 81% are “transformers,” still early in their reinvention journey. The remaining 10% are “optimisers,” neglecting reinvention to their detriment.

The benefits of transformation are clear: reinventors anticipate 20% of value within six months and 45% within a year – a 1.6x faster pace than last year. Beyond profit, AI adoption drives productivity, innovation, and long-term survival. However, transformation without responsible AI implementation can mean that the risks outweigh the benefits.

We define responsible AI as the practice of designing, building and deploying AI in a manner that empowers employees and businesses and fairly impacts customers and society.

Broadly speaking, the key ethical considerations in a responsible AI strategy are ensuring data privacy and security, and taking a people-centred approach so that employees are not left behind. Data privacy is a cornerstone of responsible AI adoption, ensuring that individuals’ sensitive information is protected while fostering trust between organisations and their stakeholders.

In South Africa, where cyberattacks are increasingly frequent, the stakes are high. According to the Council for Scientific and Industrial Research (CSIR), 88% of organisations have experienced at least one data breach, with many targeted multiple times. Breaches not only expose personal data but also damage reputations and erode public confidence.

While data privacy focuses on protecting sensitive information, security encompasses the broader systems that safeguard organisations from external threats. As technology evolves, so do the methods used by malicious actors to exploit vulnerabilities.

High-profile cyberattacks on entities like the National Health Laboratory Service and Transnet underline the critical need for robust security measures across South African institutions.

To mitigate these risks, organisations must prioritise comprehensive cybersecurity frameworks and data protection measures. This includes recruiting skilled professionals to maintain and update security systems, fostering a culture of awareness among employees, and leveraging AI-driven tools for real-time threat detection and response. A secure environment not only protects assets but also builds trust among employees, partners, and customers.

A people-centred approach is critical to ensuring AI is a tool for empowerment rather than exclusion. In a society marked by stark inequalities, AI has the potential to either bridge or exacerbate disparities. According to the Human Sciences Research Council, AI could widen existing gaps in employment and access to essential services if not carefully managed.

A key element of this approach includes transparency and accountability in addressing potential biases in AI systems. Bias can lead to discriminatory outcomes, reinforcing systemic inequalities. For example, algorithms trained on unbalanced datasets may unintentionally disadvantage certain groups, underlining the importance of rigorous fairness testing.

Education and involvement are also crucial. Organisations that actively reskill their workforce and communicate how AI will enhance – not replace – jobs stand to gain both employee trust and increased value. Generative AI, for instance, can take over repetitive tasks, freeing employees to focus on strategic and creative work.

Organisations can also look to overarching regulatory frameworks for guidance. Global efforts to regulate AI have gained momentum, setting examples for South Africa. The UN’s 2024 resolution brought all 193 member states together to govern AI collectively, while the European Union’s binding AI risk framework remains the most comprehensive to date.

In South Africa, regulation is lagging, though existing laws like the Protection of Personal Information Act (POPIA) and the Competition Act offer a foundation for AI governance. Proactive monitoring of global and local developments is essential.

Companies must prepare for evolving regulations by embedding fairness, transparency, and accountability into their AI systems from the outset.

The rapid evolution of AI requires a mindset of constant reinvention. Organisations must prioritise ongoing risk management, frequent assessments of fairness, transparency, and safety, and investments in sustainable practices.

AI is not a one-time solution but an evolving tool that demands adaptability and innovation. By centring their strategies on ethical AI adoption, organisations can balance the promise of generative AI – enhanced efficiency, productivity, and innovation – with its potential risks. A commitment to transparency, robust security measures, and human-centric policies will ensure a sustainable and inclusive transformation.

Responsible AI is not just a necessity, but a core principle of responsible business – and it should be the priority of everyone involved.