The University of Cape Town (UCT) and the Global Centre on AI Governance (GCG) have launched the African Hub on AI Safety, Peace and Security, a platform designed to ensure that African perspectives are at the centre of global artificial intelligence (AI) safety debates.

“The hub is about more than science – it’s about societal impact,” says Associate Professor Jonathan Shock, the interim director of the UCT AI Initiative. “Our aim is to ensure Africa’s priorities are represented in global AI debates while advancing research, building capacity and influencing policy.”

UCT Vice-Chancellor Professor Mosa Moshabela describes the hub as a proud moment for UCT, one that comes with great responsibility.

He says that, while AI presents remarkable opportunities, the risks cannot be ignored. “We are not only launching a hub; we are affirming our role in leading Africa’s contribution to the future of AI safety. At UCT, our mission is rooted in research, teaching and societal engagement – this initiative speaks to all three.

“By anchoring the hub here, we are saying that Africa’s voice matters, that Africa’s knowledge matters, and that Africa’s future in AI must be secured on its own terms. As a continent, we must ensure AI tools are developed responsibly and inclusively. This hub is about building a community, drawing on our strengths and amplifying Africa’s role in shaping the future of technology.”

Emily Middleton of the UK Department for Science, Innovation and Technology underlines Africa’s central role in these conversations.

“Despite people in African countries being most exposed to AI-related risks, they remain under-represented in shaping AI systems. We must confront challenges like the digital divide, tech-enabled gender-based violence, and training data that does not represent Africa’s wants and needs. UCT’s expertise positions this hub as a much-needed centre of gravity for Africa-led research with global implications.”

Maggie Gorman Vélez of Canada’s International Development Research Centre echoes this sentiment and says the hub joins a growing international movement.

“This is one of 13 multidisciplinary labs globally – a unique feature of the AI for Development programme. By fostering safe and inclusive AI ecosystems, we empower local experts to develop their own solutions and mitigate risks through the implementation of sustainable AI policies and standards.

“Global collaboration on AI brings benefits to all nations, and we are proud to partner with UCT and the Global Centre on AI Governance to ensure Africa’s priorities are represented on the global stage.”

Associate Professor Shock highlights neglected issues of AI in African contexts. “Much of global AI safety work has focused on existential risks. While these are important, there has been far less attention on the peace and security consequences of AI for African societies. Issues such as disinformation during elections, AI-driven surveillance and impacts on the labour market must be studied with urgency.

“It is often the case that AI systems developed outside of Africa do not work well within the continent because of the diversity of data. Yet this diversity is an asset. Tools developed here may prove more robust globally.”

Over the next three years, the hub will focus on research, capacity strengthening and policy influence. Partnerships with networks such as Masakhane, Deep Learning Indaba, AfriClimate AI, GRIT and CAIR will support this vision.

Shock also highlights the opportunities that AI presents in agriculture, healthcare, education and democracy. From predicting crop diseases and improving irrigation to providing low-cost diagnostic tools and detecting election disinformation, AI can be transformative, if applied responsibly.

“In agriculture, satellite imagery combined with local weather data can help smallholder farmers protect yields. In healthcare, low-cost diagnostic tools and telemedicine can save lives. In education, AI can act as a responsible, personalised tutor – vital in contexts where teachers are scarce. And in democracy, AI systems that flag disinformation in African languages can protect election integrity and strengthen democratic resilience.”

Dr Chinasa Okolo, the founder and scientific director of Technecultura, urges a reframing of AI safety from an African perspective. She calls for multilingual evaluation systems, regional infrastructure investment and public participation in AI governance.

“Mainstream AI safety has been dominated by Western-centric approaches that often exclude the lived realities of the Global South. Without deliberate effort to centre African perspectives, global AI safety initiatives risk perpetuating the very exclusions they claim to address.”

Dr Okolo says the international AI safety summits in Bletchley, Seoul and Paris, as well as the upcoming gathering in New Delhi, have elevated the debate but remain limited in inclusivity.

Story by Lyndon Julius, UCT News