Generative AI (GenAI) is proving increasingly indispensable for certain aspects of daily work, raising questions about the ethics of passing its output off as one’s own work – and the fairness of pitting AI-generated content against human-generated content.

This is the view of members of the Institute of Information Technology Professionals South Africa (IITPSA) Social and Ethics Committee and chapter leadership, who have called for transparency and frameworks for declaring the use of AI.

Kudzayi Chipidza, chairperson of the IITPSA Social and Ethics Committee, concedes that the ethical boundaries around the use of AI are complex and constantly evolving. “As GenAI becomes increasingly integrated into creative, professional, and academic domains, it raises pressing questions about transparency, accountability, and fairness,” he says.

Chipidza says presenting GenAI outputs as human-generated content raises a number of ethical questions.

“For example, when is disclosure necessary? If GenAI contributes substantially to the output (for example, generating code, writing reports, or creating art), ethical practice demands acknowledgment,” he says. “This brings us to the question of what counts as ‘substantial’. If GenAI goes beyond basic assistance (for example, autocomplete or grammar checks) and generates core ideas, structure, or content, then transparency is required.”

He notes that professional norms matter – in academia, journalism, or legal work, failing to disclose AI assistance could constitute plagiarism or misrepresentation. In business, it may mislead stakeholders.

“Using GenAI to enhance productivity is ethical, but presenting AI-generated work as entirely human-made could unfairly disadvantage those who don’t use such tools,” he says.

Chipidza also highlights the risks of using GenAI-generated content without due caution: “Human oversight is key: Even if GenAI aids in creation, the user is responsible for verifying accuracy, fairness, and originality. Passing off unchecked AI work as one’s own risks spreading errors or misinformation. In fields like law or medicine, relying on AI without proper review could have serious consequences, making attribution and validation critical. There are also plagiarism concerns: If GenAI reproduces copyrighted or unattributed material, the user bears responsibility for proper sourcing.”

Ethical development and use

Professor André Calitz (PhD, DBA), Distinguished and Emeritus Professor at the Department of Computing Sciences at Nelson Mandela University, Professional Member and Fellow of IITPSA and Eastern Cape IITPSA Chapter Committee member, says: “Academics are presently grappling with and debating the question: How do we instil the responsible and transparent use of AI tools in our students?

“Ethics and ethical behaviour only apply to humans; therefore, ethical AI development and deployment are our responsibility,” he continues. “Humans are responsible for selecting the training data, the training of large language models, and the development and testing of algorithms. A human-centric approach to AI is required for the development and deployment of AI systems that respect human dignity and autonomy. Academics and professional bodies need to provide guidelines that inform and direct both the use and deployment of these new AI-enabled systems and technologies.”

Checks and balances

IITPSA Social and Ethics Committee member Constandious Takura Munakandafa emphasises that GenAI technology is maturing much faster than the human-led policies and strategies meant to govern it.

“Few African countries have adopted AI policies, with most nations on the continent heavily dependent on outdated ICT policies,” Munakandafa says. “This has the potential to cast doubt on whether the current artifacts from these countries are original human work, or generative AI output rubber-stamped with a human name that has not passed the checks and balances of AI ethical guidelines.

“It has become more important than ever for governments, industry, and institutions like IITPSA to take a leading role in drafting and implementing AI policies that are future-centric and informed by robust ethical guidelines.”

Chipidza adds that the ethical use of GenAI hinges on intent and impact. “Ethical boundaries are violated when GenAI’s role is deliberately hidden to claim undue credit, when GenAI-generated content is used without verification leading to harm, or when the work misrepresents human skill or effort – such as submitting a GenAI-written essay as original thought,” he says. “Responsible practice means being transparent when AI plays a major role, ensuring human accountability for final outputs, and adapting norms as GenAI’s role in workflows evolves.”

He suggests practical approaches such as tiered disclosure, with the level of disclosure matched to the degree of reliance on GenAI, as sketched below. Industry-specific standards should also be followed, he says.

“For example, academia should follow institutional policies, creative professions should clarify AI’s role (for example, stating ‘AI-assisted’ or ‘AI-generated’), while business should ensure compliance with internal policies on AI use,” Chipidza says.
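To illustrate how such a tiered scheme might be operationalised, for instance in an organisation’s internal review tooling, the following is a minimal sketch. The tier names, labels, and decision logic are assumptions made for illustration, not an IITPSA or industry standard.

```python
from enum import Enum


class DisclosureTier(Enum):
    """Hypothetical disclosure tiers; names and labels are illustrative only."""
    NONE = "no disclosure required"          # basic assistance: autocomplete, grammar checks
    AI_ASSISTED = "label as 'AI-assisted'"   # GenAI shaped structure or wording
    AI_GENERATED = "label as 'AI-generated'" # GenAI produced core ideas or content


def required_disclosure(generated_core_content: bool,
                        shaped_structure_or_wording: bool) -> DisclosureTier:
    """Map the role GenAI played in a piece of work to a disclosure tier.

    The logic mirrors the distinction drawn in the article: basic
    assistance needs no disclosure, while substantial contribution does.
    """
    if generated_core_content:
        return DisclosureTier.AI_GENERATED
    if shaped_structure_or_wording:
        return DisclosureTier.AI_ASSISTED
    return DisclosureTier.NONE


# Example: a report whose outline and key arguments came from a GenAI tool.
tier = required_disclosure(generated_core_content=True,
                           shaped_structure_or_wording=True)
print(tier.value)  # -> label as 'AI-generated'
```

In practice, the inputs to such a check would come from the author’s own declaration or from workflow metadata, and the thresholds between tiers would be set by the institutional or industry policies Chipidza refers to.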