As artificial intelligence (AI) continues to sweep into higher education with Silicon Valley-style excitement, Africa faces a far more urgent and complex challenge, writes Dr Nyx McLean, head of research and postgraduate studies at Eduvos.

While higher education institutions in the Global South embrace AI tools with open arms, we risk importing technologies that don’t reflect our realities, values, or aspirations.

Speaking at the 2025 SARUA Conference felt like a turning point. It underscored the need to shift from isolated institutional efforts toward a united regional strategy, one grounded not in Western tech priorities but in African perspectives.

Having studied AI’s role in South African higher education, I’ve seen both its promise and its pitfalls. The question is no longer whether we should use AI – its presence is unavoidable. The real question is whether we’ll let Western-developed technologies reshape our education systems on their terms or forge our own path.


Whose Knowledge Does AI Reflect?

At Eduvos, where I serve as head of research and postgraduate studies, our students regularly use AI tools like ChatGPT for academic support. On the surface, it’s a convenient aid. But these tools are trained predominantly on Western content, often reflecting assumptions and perspectives that don’t match our lived experiences.

This matters profoundly in a country like South Africa, where we’ve made significant strides in decolonising our curricula since 2015. AI responses to topics like African governance or history often perpetuate the very stereotypes and cultural biases we’ve tried to dismantle.

AI systems learn from vast repositories of online content, most of which originate in the Global North. When students use them for learning, they’re essentially outsourcing their education to a worldview that may not understand their context or future.


Efficiency Isn’t Equity

In under-resourced educational environments, AI’s efficiency can be alluring. Tools that assist with writing, coding, and research seem like a quick fix. But education is about more than information – it’s about critical thinking, grappling with uncertainty, and personal growth.

At Eduvos, a period of open-book, take-home exams across 87 modules and over 1 600 assessments brought this tension into sharp focus. As AI tools became part of students’ learning ecosystems, educators and learners alike had to confront what meaningful learning really looks like in an AI-enabled world.


A Technocritical Approach

Rather than blindly adopting or rejecting AI, we need a technocritical mindset – one that recognises technology as shaped by the politics, values, and power structures of its creators.

This approach rests on five core principles:

Participatory engagement: Instead of passively adopting technologies developed elsewhere, we must insist on having a voice in how these tools are designed and implemented. Big Tech companies shouldn’t be the sole arbiters of what constitutes appropriate AI use in education.

Critical engagement: We must ask difficult questions about who benefits from these technologies and how they might affect our students’ learning experiences. Discomfort with easy answers should be seen as productive rather than problematic.

Real-world application: It’s not enough to theorise about AI’s impact. We need to pilot these tools in actual learning environments, adapt them based on what we discover, and develop solutions that work for our specific contexts.

Reflexive dialogue: Educators must examine their own assumptions about AI’s benefits and risks, questioning how these perspectives influence their teaching practices and affect student outcomes.

Ethics of care: Throughout this process, we must prioritise the wellbeing of our students, particularly considering how AI adoption might create new forms of disadvantage or exclusion.


What This Looks Like in Practice

We’ve reimagined assessments so they can’t simply be outsourced to AI, yet still allow for its technocritical use. Students conduct interviews, engage with primary sources, reflect on their learning, and present their insights through screencasts.

In Law, students test AI-generated answers against South African case law. In Commerce, they compare AI outputs with trends in their communities. In Humanities, they analyse local cultural material to critique global narratives.

These methods don’t reject AI – they integrate it critically. Students are encouraged to see where AI supports learning, and where it falls short.


A Moment of Choice

We are at a crossroads. Will we allow AI to become a new form of digital colonisation, where African students passively consume Western content? Or will we take ownership of this moment, shaping AI use in a way that reflects our diverse identities and ambitions?

This isn’t about techno-optimism or doom. It’s about what scholars call “muddling through” – acknowledging complexity while staying grounded in core educational values like justice, equity, and critical inquiry.

Our aim shouldn’t be perfection, but ongoing learning and adaptation. That’s how we ensure AI becomes a tool for empowerment rather than exclusion.


Leading, Not Following

As institutions across Africa explore AI integration, we have a chance to lead with purpose. We can show the world how thoughtful, context-driven innovation can enhance – not erode – educational integrity.

The future of AI in education must be co-authored by African educators, students, and communities – not dictated by the likes of Silicon Valley.

That message resonated strongly at the SARUA 2025 Conference, where I presented our Eduvos case study alongside my colleagues Dr Mandi Joubert and Dr Miné De Klerk. Under the theme “Innovating Higher Education for Sustainable Development across the SADC”, the conference made one thing clear: adaptation is essential, but it must never come at the expense of our own agency.