The University of Cape Town’s (UCT) Centre for Innovation in Learning and Teaching (CILT) has shared early insights from its emerging research into how staff and students develop the literacies needed to use generative AI (GenAI) ethically, critically and constructively in teaching, learning and assessment.
The project responds to a growing number of requests across the university community for guidance, clarity and support in navigating the rapid rise of GenAI tools.
It also builds on key themes within the UCT Framework for Artificial Intelligence in Education, particularly around responsible, context-appropriate practice.
“There have been a lot of requests from staff on how to navigate challenges that have emerged from GenAI. There has also been a call to provide appropriate AI literacies for staff and students,” says Mishka Reddy, a senior project coordinator and online learning designer at CILT.
She explains that the team’s work aims to understand what support is most needed.
Guidance, clarity and support
The researchers drew on sociomateriality as both a theoretical lens and an analytical tool. This approach sees AI practices not as fixed skills, but as relational, emergent and continually reconfigured through interactions between people, technologies, expectations and contexts.
Reddy explains that concepts like entanglements, elements, assemblages and agency help highlight what shapes the everyday use of GenAI in the university space.
Entanglements speak to the relationships between sociomaterial elements, while assemblages are collections of people, objects, discourses and technologies whose relations produce particular effects.
Agency, in this instance, is understood not as residing in the tools alone, but as emerging from the interactions between these elements.
One case study explored a Faculty of Science writing course aimed at strengthening discipline-specific academic writing, where the team mapped the sociomaterial elements shaping GenAI use.
“These include AI output inaccuracies, students’ disciplinary knowledge, lecturers’ disciplinary knowledge, academic integrity expectations, AI declaration requirements and teaching materials,” Reddy says.
Competing pressures
The Faculty of Commerce case study, meanwhile, revealed a different set of dynamics.
Learning design consultant Lara Karassellos describes a tension between how AI is used in industry, where comfort with AI tools is increasingly expected, and the academic environment, where concerns about integrity and assessment standards remain high.
“These are competing pressures for staff, who try to prepare students for the reality of the industry while also facing academic integrity concerns from professional bodies,” she says.
Many staff also described a lack of clarity and guidance around how assessment should adapt in the age of AI.
The commerce assemblage included the department, lecturers, students, professional bodies, CILT resources, GenAI tools and the broader industry expectations that shape student preparation.
Karassellos emphasises that the research is still at a formative stage, but early findings already show that disciplinary context strongly shapes AI use and AI literacy needs.
“One single, common AI position is not sufficient, and we feel we need to work with staff to understand what they are grappling with, and to tailor our offerings to that,” she says.