The total global economic impact of artificial intelligence (AI) in the period to 2030 is estimated at $15,7-trillion. Furthermore, AI augmentation will create $2,9-trillion of business value and 6,2-billion hours of worker productivity around the world.
The technology can therefore be a powerful force for good and deliver an extraordinary competitive advantage. But AI also brings ethical challenges that are exacerbated by the scale of deployment and the range of applications.
According to Josefin Rosén, principal advisor: analytics at SAS: “Central to this is the matter of bias, a significant issue in AI. After all, most AI applications are based on machine learning algorithms that learn from the data they are fed. Inevitably, this means that AI tools reflect the biases of their developers.
“If discriminatory data is feeding the machine learning algorithm, then the output will be discriminatory as well. As individuals, we express some 180 known cognitive biases. Some are obvious, for example gender, racial, and cultural biases, while others are less so.”
Biases transmitted to the algorithm are just the beginning of the potential risk, because they will most likely be greatly amplified. With today’s technology, an AI application can make thousands of decisions every minute, using enormous computational resources and distributed calculations. This means that if something goes wrong, it can quickly go very wrong.
Data selection matters
Rosén explains that companies must also take a closer look at selecting data that is appropriate for use. For example, is it appropriate to use data generated on an individual 20 years ago instead of over the past three months? “The risk of confirmation bias, where decision-makers seek out the information that reinforces their pre-existing views, is probably the most dangerous.”
The lack of diversity in data and development can lead to problems for the end user. For instance, there are multiple examples where voice recognition performed worse for women than for men, and where facial recognition performed worse for women, especially black women.
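Disparities like these often stay hidden when a model is judged only on its aggregate accuracy. A minimal sketch of how an evaluation broken down by demographic group can expose the gap is shown below; the function name `group_accuracy` and all the data are invented for illustration.

```python
# Hypothetical illustration: evaluating a model per demographic group
# instead of in aggregate. All names and data below are invented.

def group_accuracy(y_true, y_pred, groups):
    """Return accuracy broken down by group label."""
    stats = {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (truth == pred), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

# Invented predictions from a recognition model: the first five samples
# come from men, the last five from women.
y_true = [1, 1, 1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
groups = ["men", "men", "men", "men", "men",
          "women", "women", "women", "women", "women"]

per_group = group_accuracy(y_true, y_pred, groups)
overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(overall)    # a single aggregate number masks the disparity
print(per_group)  # the per-group breakdown makes it visible
```

In this invented example the aggregate accuracy is 0.7, while the breakdown shows 1.0 for men and only 0.4 for women; the single headline number hides exactly the kind of gap the voice- and facial-recognition examples describe.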
Imagine the HR department wants to hire an engineer using an AI recruitment tool. It is part of human nature to judge a person based on how he or she looks. Machine learning algorithms are trained on historical data and, historically, men held most of these positions.
Therefore, if the data is not balanced and the variables are not carefully considered, the tool is likely to prefer male candidates over female candidates as successful applicants.
“Amazon had to retire its AI recruitment tool because of its bias against women. Even though the company removed the gender variable, the AI still used other elements to predict gender. For instance, men use verbs like ‘execute’ more frequently in their applications. The algorithm therefore identified the candidates that matched the historical data about who the company preferred. It only did its job, but that reflected historical biases in the training data used to create the algorithms,” says Rosén.
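The mechanism Rosén describes, a model recovering a removed attribute through correlated proxy features, can be sketched in a few lines. This is not Amazon's actual system; the history, the single word-choice feature, and all numbers are invented to show how dropping the gender column alone does not remove the bias.

```python
# Hypothetical sketch of proxy leakage: the gender column has been dropped,
# but a word-choice feature that correlates with gender in the (invented)
# historical data carries the signal anyway.

from collections import defaultdict

# Invented historical applications: (uses_word_execute, hired).
# Gender is NOT a feature, but in this made-up history the applicants who
# used the word "execute" (mostly men) were mostly the ones hired.
history = [
    (True, 1), (True, 1), (True, 1), (True, 0),
    (False, 0), (False, 0), (False, 0), (False, 1),
]

# "Train" a one-feature frequency model: P(hired | uses_word_execute).
counts = defaultdict(lambda: [0, 0])  # feature value -> [hired, total]
for feature, hired in history:
    counts[feature][0] += hired
    counts[feature][1] += 1

def hire_score(uses_word_execute):
    hired, total = counts[uses_word_execute]
    return hired / total

# Two equally qualified candidates who differ only in word choice:
print(hire_score(True))   # favoured, even though gender was removed
print(hire_score(False))  # penalised for the same qualifications
```

The model never sees gender, yet it reproduces the historical preference through the proxy feature: in this toy history the "execute" candidate scores 0.75 against 0.25. That is why removing a sensitive variable, without rebalancing the data and auditing correlated features, is not enough.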
Inclusivity matters
Even though the AI industry in Africa is still emerging compared to the US, Europe, and Asia, innovations are happening. For instance, the Zindi data science challenge platform features a community of thousands of African data scientists solving the continent’s toughest challenges.
According to Kelly Lu, lead: advanced analytics and artificial intelligence at SAS in South Africa, this makes it even more important to have inclusivity in the AI development process.
“People who build these solutions must reflect society, especially when it comes to the gender and racial mix. Regulation is also required that keeps pace with the evolution of the technology and its potential to benefit everyone on the planet. But before we can have a discussion on its ethical usage, companies’ adoption of the technology must mature.”
Both Rosén and Lu agree that regulators across markets are playing catch-up. However, there are positive signs, with government ethics frameworks converging on the same universal principles, such as fairness, non-discrimination, privacy, security, safety, and transparency.
Convenience versus control
“It comes down to operationalising responsible AI. Companies can look at product features to do this, but also need to consider data quality, data governance, and model governance. Regulators are also considering such tools when it comes to developing responsible AI frameworks,” says Lu.
Fortunately, there are signs that businesses will shift towards responsible AI by design in the same way that privacy by design is practised today, thanks to the regulatory environment.
“With AI power comes responsibility. And we all need to accept our roles in this regard to remove as much of the biases in developing AI algorithms as possible,” concludes Rosén.