AI offers unprecedented opportunities for growth and innovation but also presents significant challenges and risks, especially concerning human rights and corporate governance, write Pooja Dela-Cron, partner, and Paula-Ann Novotny, senior associate, at Webber Wentzel.
Imagine a future where your access to justice depends on an algorithm, your freedom of expression is filtered through AI, and your personal data becomes a commodity traded without your consent. This is not a dystopian fantasy but a reality we are inching closer to as artificial intelligence (AI) becomes deeply integrated into our daily lives.
In an era where technology intertwines with daily life, AI emerges as a double-edged sword, cutting through the fabric of society with both promise and peril. As AI reshapes industries, it also casts a long shadow over fundamental human rights and ethical business practices.
Consider the tale of a facial recognition system inaccurately flagging an innocent individual as a criminal suspect – and worse still, flagging individuals based on racial biases. Such instances underscore the urgent need for vigilance and responsibility in the age of AI.
The AI Revolution and the Rule of Law
AI technologies are reshaping the legal landscape, introducing novel forms of digital evidence and altering traditional concepts of the rule of law. Courts worldwide grapple with the admissibility of AI-generated evidence, while law enforcement agencies increasingly rely on facial recognition and predictive policing tools, raising profound concerns about fairness, transparency, and accountability.
The erosion of legal protections and standards in the face of AI’s opaque algorithms threatens the very foundation of justice, emphasising the need for regulatory frameworks that keep pace with technological advances.
The transformative power of AI in the legal domain is both fascinating and alarming. With the increasing spread of fake news, elections can be marred by misinformation, disinformation, and hate speech. Yet AI can also play a key role in verification campaigns, as a pilot project conducted by the United Nations Development Programme during Zambia's 2021 elections demonstrated.
In the United States, the use of AI in predictive policing and sentencing algorithms has sparked debate over fairness and bias. Studies, such as the 2016 ProPublica report, have highlighted how algorithms can inherit and amplify racial biases, challenging the very notion of impartial justice.
These issues underscore the necessity for legal systems worldwide to adapt and ensure AI technologies uphold the highest standards of equity, accuracy and transparency.
Intersectionality of AI and Human Rights
The impact of AI on human rights is far-reaching, affecting everything from freedom of expression to the right to privacy. For instance, social media algorithms can amplify or suppress certain viewpoints, while automated decision-making systems can deny individuals access to essential services based on biased data.
Automated content moderation systems on social media platforms can also inadvertently silence marginalised voices, impacting freedom of speech.
The deployment of mass surveillance technologies in countries like China similarly raises severe privacy concerns, illustrating the global need for AI governance that respects and protects individual rights.
These examples highlight the critical need for AI systems that are designed and deployed with a deep understanding of their human rights implications. Ensuring that AI technologies respect and promote human rights requires a concerted effort from developers, policymakers, and civil society.
Closer to home, the issue of digital and socioeconomic divides further complicates the intersectionality of AI and human rights. AI-driven solutions in healthcare and agriculture, for example, have shown immense potential to bridge socio-economic gaps. The balance between leveraging AI for societal benefits whilst protecting individual rights is a delicate one, necessitating nuanced governance frameworks.
Whilst these frameworks are still nascent in many jurisdictions around the world, the United Nations has prioritised efforts to secure the promotion, protection and enjoyment of human rights on the Internet.
In 2021, the United Nations Human Rights Council adopted a resolution on the promotion, protection and enjoyment of human rights on the Internet, heralded as a milestone for recognising that all the rights people have offline must also be protected online.
This resolution built on earlier UN resolutions that condemned any measure to prevent or disrupt access to the internet and recognised the importance of access to information and privacy online for the realisation of the right to freedom of expression and to hold opinions without interference.
In 2023, the United Nations High Commissioner for Human Rights, Volker Türk, said the digital world was still in its early days. Around the world, more children and young people than ever before are online, either at home or at school, but depending on birthplace, not everyone has this chance.
The digital divide means a staggering 2.2 billion children and young people under 25 around the globe still do not have access to the Internet at home. They are being left behind, unable to access education and training, or news and information that could help protect their health, safety and rights.
There is also a gap between girls and boys in terms of access to the Internet. Türk concluded by saying: "It may be time to reinforce universal access to the Internet as a human right, and not just a privilege".
Corporate Responsibility in the AI Era
For corporations in South Africa, Africa, and globally, AI introduces new risk areas that must be navigated with caution and responsibility. General counsel the world over must investigate and implement strategies around privacy, data protection, and non-discrimination, as the misuse of AI can lead to significant reputational damage and legal liability.
Corporations must adopt ethical AI frameworks and corporate social responsibility initiatives that prioritise human rights, demonstrating a commitment to responsible business practices in the digital age.
Corporations stand at the frontline of the AI revolution, bearing the responsibility to wield this powerful tool ethically. Google’s Project Maven, a collaboration with the Pentagon to enhance drone targeting through AI, faced internal and public backlash, leading to the establishment of AI ethics principles by the company.
This example demonstrates the importance of corporate accountability and the potential repercussions of neglecting ethical considerations in AI deployment. It also highlights that influential corporations hold a significant level of leverage in their environments. This leverage should be used to progress respect for human rights across the value chain.
The Challenge of Regulation
Regulating AI presents a formidable challenge, particularly in Africa, where socio-economic and resource constraints are significant. The rapid pace of AI development often outstrips the ability of regulatory frameworks to adapt, leaving gaps that can be exploited to the detriment of society.
Moreover, regulatory developments in the Global North often set precedents that may not be suitable for the African context, highlighting the need for regulations that are inclusive, contextually relevant, and capable of protecting citizens’ rights while fostering innovation.
Limited resources and expertise in technology governance compound this challenge across much of the continent.
The European Union’s General Data Protection Regulation (GDPR) serves as a pioneering model for embedding principles of privacy and data protection in technology use, offering valuable lessons for African nations in crafting their regulatory responses to AI.
Towards a Sustainable Future
The path towards a sustainable future, where AI benefits humanity while safeguarding human rights, requires collaboration among businesses, regulators, and civil society. Stakeholders must work together to develop and implement guidelines and standards that ensure AI technologies are used ethically and responsibly.
Highlighting examples of responsible AI use, such as initiatives that provide equitable access to technology or projects that leverage AI for social good, can inspire others to follow suit.
Collaboration is key to harnessing AI’s potential while safeguarding human rights and ethical standards. Initiatives like the Partnership on AI, which brings together tech giants, non-profits, and academics to study and formulate best practices on AI technologies, exemplify how collective action can lead to responsible AI development and use.
As AI and related technologies continue to transform our world, we must not lose sight of the human values that define us. The intersection of AI, business, and human rights presents complex challenges but also opportunities for positive change, not only for governments but for corporations too.
By fostering ongoing dialogue and cooperation among all stakeholders, we can shape a future where technology serves humanity’s best interests, ensuring that the digital age is marked by innovation, equity, and respect for human rights. Corporate governance frameworks will need to adapt in response to these advances.
As Africa navigates the complexities of AI integration, the journey must be undertaken, byte by byte, with a steadfast commitment to ethical principles and human rights. The continent’s diverse tapestry of cultures and histories offers unique insights into responsible AI governance.
By prioritising transparency, accountability, and inclusivity, African governments and corporations can lead the way in demonstrating how technology, guided by human values, can be a powerful tool for positive change.
In the digital age, the fusion of innovation and ethics will define Africa’s trajectory, ensuring that AI becomes a catalyst for empowerment rather than a source of division.