As artificial intelligence (AI) uptake surges, ethics must be considered in AI development and deployment to avoid risks such as bias, discrimination and exclusion.
This is according to speakers in a webinar hosted by the Institute of Information Technology Professionals South Africa (IITPSA) on Responsible AI in Africa.
The webinar, hosted by the IITPSA Artificial Intelligence & Robotics special interest group (SIGAIR) and Social & Ethics Committee, unpacked how AI could be used responsibly to drive prosperity for all in Africa.
IITPSA Social & Ethics Committee Chair Josine Overdevest highlighted several ethical risks in AI: “AI can tend to bias and discrimination – for example when you only have developers from one demographic. They may overlook perspectives from other people.” She noted that privacy risks could emerge in data capture and processing, and in the use of AI in surveillance.
“In just and interconnected societies, transparency is important – for example an avatar should be identified as such, or people should be informed of the reasons why certain decisions were made by AI,” she said.
AI’s impact on people had to be considered carefully, Overdevest said. “We must consider the issue of job displacement – what do we do with people who lose their jobs due to AI, and how do we upskill them? Another important issue is digital divides – how do we ensure that everyone we design these technologies for can access, use and understand them?”
She highlighted concerns around autonomy and control, addiction and manipulation, misinformation, and ethical decision making. “We must ensure that AI systems make ethical decisions that take into account cultural and moral values,” she said. “Technology development outpaces regulation development, but we can’t wait until there is more regulation around AI ethics before we act responsibly.”
Overdevest emphasised that businesses designing and deploying AI in the market must do so responsibly. “The business incentives of ethical, responsible AI include cost reduction, ESG compliance, brand resilience, and employee recruitment and retention – particularly among younger employees who want to work for a company that is ethically responsible.”
AI opens new opportunities in Africa
Zambian software engineer and Rhodes Scholar Dr Fredah Banda outlined the development potential for AI in Africa: “AI has the potential to revolutionise sectors such as healthcare, agriculture, education and finance.”
Dr Banda said: “In healthcare, AI in image processing and analytics is being used to enhance diagnostic accuracy, which speeds up diagnosis and widens access to healthcare. Apps and online services are increasing access to healthcare. AI helps address complex medical challenges more effectively, and also supports health monitoring and reporting, including smart wearables which can track health metrics to support a diagnosis. Examples of AI in practice in African healthcare include HealthMap, which tracks outbreaks, as well as Babylon Health and mPharma.”
In agriculture, she noted, AI is helping support food production: for example, it is used with drones to help farmers monitor crop growth and detect disease. AI is also used in solutions that predict weather patterns and inform planting decisions, and that improve supply chain management and reduce food waste.
“In finance, the most common use of AI is in automated credit scoring,” she said. “However, it is also playing an important role in areas such as fraud detection, risk management, algorithmic trading and market research.”
“AI is taking the world of education by storm by providing personalised learning and individualised feedback to learners. It can also be used to assist teachers with grading papers, creating lesson plans and identifying students who are struggling. It also improves access to education for students in remote or underserved areas,” Dr Banda said.
Priorities for responsible AI
Polls conducted during the webinar gauged participants’ views on ethical considerations in AI. Asked which ethical consideration is most important for responsible AI in healthcare and telemedicine, 73% said privacy and data security, 13% said transparency in AI decision-making, and 7% each said inclusivity in healthcare access and fairness in diagnosis. In financial services and fintech, 47% said avoiding discrimination in credit scoring should be a priority, 40% said data privacy and security should be the top consideration, and 7% each said the top consideration should be ensuring informed consent and equitable access to financial services.
On top considerations for responsible AI in agriculture, 36% voted for promoting sustainable agricultural practices, 29% said environmental impact reduction, 21% said equitable access to farming technology and 14% said fair treatment of small-scale farmers. On education and e-learning considerations, 33% said promoting educational equity should be the top consideration, 25% each ranked student data privacy and addressing bias in recommendations as a priority, and 17% said personalised learning experiences should be a top consideration.