A South African paper on the role of human-centric AI governance in the pursuit of sustainable healthcare for the African continent recently received the Titanium Award for Best Paper at the 9th Annual Board of Healthcare Funders (BHF) Conference in Cape Town.
Authors Dr Odwa Mazwai, MD of Universal Care, and Alicia Tait, director of legal affairs, risk and corporate governance at Universal Healthcare, brought their respective expertise in medicine and law to provide a fresh perspective on the use of AI in healthcare.
The peer-reviewed paper examines global trends in AI regulation, the principles of a human-centric approach and the impact on sustainable healthcare with a focus on collaboration.
“African policymakers urgently need to create a harmonised regulatory framework that will protect the continent’s data assets and promote the wellbeing of its citizens. At the same time, organisations must commit to the principles of responsible AI by implementing the practical steps outlined in the paper,” says Tait.
Organisations can adopt the draft AI policy presented by the duo during the conference to begin implementing the necessary AI governance structures with detailed roles and responsibilities.
“Corporates can dispel the fear that AI innovation will replace employees and reassure them that its main purpose is to enhance human capabilities as part of their commitment to the ethical use of AI,” she notes.
Passionate about the potential of personalised healthcare models, Dr Mazwai pointed out that AI holds immense promise for radically improving healthcare systems with broader access, better patient outcomes, reduced costs, and greater healthcare sustainability. However, he warned that the absence of effective AI governance can lead to harm, perpetuated biases and manipulation through the spreading of disinformation.
“The information collected by AI systems can be used to nudge humans on a subconscious level to make certain decisions. The Cambridge Analytica case study is an alarming example of failed AI governance. In this example, people’s Facebook data was processed by AI, and those outputs were used for targeted advertising to effectively influence voting during the 2016 US Presidential election,” he says.
Tait further emphasises the legal risks associated with AI, stressing that although current legislation in Africa covers aspects of AI to a certain degree, there are substantial gaps that can only be filled by specific AI laws that will enable regulators to hold AI actors accountable.
“Privacy laws seek to put us in control of our personal information, but AI tools can be too complex to allow people to make informed decisions about the information created about them in this manner. Furthermore, datasets that are inaccurate or not fully representative of the population we are trying to serve can cause adverse outcomes.
“Organisations and healthcare providers need to be aware of the potential liabilities associated with the use of AI tools. This is particularly relevant in the African context, where social and gender inequalities impact the availability of the data that underpins machine learning. These aspects highlight why AI must be regulated,” says Tait.
According to the comparative analysis of global trends in AI regulation referenced in the paper, Africa is the lowest-scoring continent in terms of its governance, data infrastructure, and technology pillars.
“As the most genetically diverse continent on earth, Africa is uniquely placed to benefit from data as a commodity. However, due to foundational barriers, there is a risk that big technological companies from other continents may colonise the digital economy. We, therefore, need to focus on effective regulation that will create an environment where we can collaborate and partner with investors without losing ownership of the data,” she says.
Sources cited in the paper predict that AI has the potential to contribute over $15-trillion to the global economy by 2030, including $1.2-trillion on the African continent, which would significantly contribute to achieving the United Nations’ Sustainable Development Goals.
Tait points out that at a global level, the United States is the top-scoring country despite the fact that it does not have federal laws governing AI. The European Union promulgated the first AI-specific law and regulated AI models according to their level of risk. China, which attained 16th place on the AI Readiness Index, only regulates generative AI, and the state remains in control of technology.
“What we see is a trend towards a risk-based approach in AI regulation, underpinned by international principles and country-specific policy initiatives. It is imperative that AI-generated work is labelled as such, and users must have the right to insist on intervention by a human,” says Tait.
Significantly, South Africa scored well above the global average for technology and data infrastructure, but its total was dragged down by a low score for governance and ethics in the government pillar. According to Tait, this clearly indicates where focus is lacking.
While there has been a significant increase in the publication of African AI strategies, only 10 countries have national AI governance strategies. At the same time, there are still countries without any form of data protection legislation, and where it is in place, regulators are often not empowered to enforce the laws.
“A lack of internet connectivity presents further challenges. Research by various sources shows that over 300-million Africans live more than 50km from a stable broadband or internet connection, and around 500 million people do not have identification, resulting in the lack of digital IDs.
“Africa is also home to 2 144 languages, which introduces complexities in the training and use of AI models. There is also the matter of women being underrepresented in datasets, which needs to be addressed,” she explains.
A human-centric approach
Commenting on the need for a human-centric approach to AI, Tait and Mazwai clarified that humans must retain control and make the final decisions where AI models are used.
“For example, a medical doctor must be able to inform their patient how an AI model works, what the risks are and why it reached a conclusion so that the patient can give informed consent. Where the model is too complex to be fully explainable, then this must be clearly communicated upfront.
“It is critical that the source and processes of AI models are carefully documented to allow proper risk management. Human-centric AI must uphold human rights, enable humans to challenge AI decisions, and insist on human alternatives.
“This is fundamental to harnessing AI’s true potential to improve healthcare services. It has further applications for mitigating healthcare’s impact on the environment and further contributing to sustainability,” he points out.
Dr Johan Pretorius, CEO of Universal Healthcare, congratulated Tait and Dr Mazwai on their award-winning paper, emphasising the far-reaching implications of their work on AI governance in healthcare in South Africa and worldwide.
“Their insightful analysis underscores the need for proactive steps to ensure that AI, as a powerful technology, is developed and used in ways that promote the health and wellbeing of all people, regardless of their location. AI is a tool for humanity, and it is crucial that we establish governance frameworks that reflect this and protect the interests of patients and populations everywhere. We urge policymakers globally to take immediate action on this critical issue,” concludes Dr Pretorius.