We’re on the cusp of the Fifth Industrial Revolution, writes Robin Fisher, area vice-president of Salesforce – Emerging Markets.
One that will largely be defined by the growth of artificial intelligence (AI), which has taken giant leaps in helping us accomplish tasks more quickly and efficiently.
Although there is much more potential to make a positive impact, we are also mindful of how AI can be problematic. We’ve seen this in instances where voice recognition software has shown bias against female voices and crime-prediction tools have reinforced discriminatory policing.
To build AI with confidence, we must focus on inclusive practices and ethical intent. That means designing AI that can transparently explain the impact and rationale of its actions and recommendations.
As we consider how AI can positively contribute to society, here are three pillars for building trusted AI:
Cultivate an ethics-by-design mindset
In the enterprise, ethics in AI means creating and sustaining a culture of critical thinking among employees. It’s not feasible to ask a single group to take sole responsibility for identifying ethical risks during development. Ethics-by-design requires perspectives from different cultures, ethnicities, genders and areas of expertise.
A “consequence scanning” framework (envisioning unintended outcomes of a new feature and how to mitigate harm) not only benefits the end product; it also encourages development teams to think critically about their product: how might someone use it with malicious intent, out of ignorance, or in a completely different way than intended?
Creating an environment that embraces input from a broader audience can help organisations eliminate blind spots where bias can take hold.
By offering training programmes that help employees put ethics at the core of their workflows, organisations can empower their workforces to more critically identify potential risks.
One such measure is training new employees to understand their role in the development process, helping them build an ethics-by-design mindset from the start.
Apply best practice through transparency
It is one thing to build AI in a lab, but another to accurately predict how the AI will perform in the real world. Throughout the product life cycle, questions of accountability should be top-of-mind.
Transparency is key, and actively sharing information with the right audiences is often critical to capturing diverse perspectives.
Collaborating with external experts – academics, industry and government leaders – can improve outcomes. Feedback on how teams collect data can help avoid unintended algorithmic consequences, both in the lab and in future real-world scenarios.
Providing as much transparency as possible around how an AI model has been built gives the end user a better sense of the safeguards in place to minimise bias.
This can be done, for example, by publishing model cards. Similar to nutrition labels, AI model cards describe a model’s intended use and users, its performance metrics, and any relevant ethical considerations. This helps build trust among prospective and existing customers, regulators and wider society.
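To make this concrete, here is a minimal sketch of what a model card might contain, expressed as a plain Python structure. The model, field names and values are hypothetical illustrations in the spirit of published model-card proposals, not a fixed schema:

```python
# A minimal, hypothetical model card for an illustrative lead-scoring model.
# The fields echo the spirit of "Model Cards for Model Reporting"
# (Mitchell et al.); they are examples, not a prescribed standard.
model_card = {
    "model_name": "lead-scoring-v2",
    "intended_use": "Rank inbound sales leads for follow-up prioritisation",
    "intended_users": ["sales operations teams"],
    "out_of_scope_uses": ["credit decisions", "employment screening"],
    "training_data": "CRM opportunity records, Jan 2019 - Dec 2021",
    "performance_metrics": {"auc": 0.87, "precision_at_10pct": 0.62},
    "ethical_considerations": [
        "Age, race and gender fields were excluded from training",
        "Postcode flagged as a potential proxy for protected attributes",
    ],
    "limitations": "Performance not validated on non-English-language records",
}
```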
To trust AI, people need to understand why AI makes certain recommendations or predictions. AI users approach these technologies with different levels of knowledge and expertise. To inspire confidence and avoid confusion, teams need to understand how to communicate these themes and explanations appropriately for different users.
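As a rough sketch of tailoring explanations to a non-expert audience, the snippet below turns per-feature contribution scores (such as a linear model or a SHAP-style explainer might produce) into a plain-language summary. The feature names, scores and phrasing are illustrative assumptions:

```python
def explain_prediction(contributions: dict[str, float], top_n: int = 3) -> str:
    """Turn per-feature contribution scores into a plain-language explanation.

    `contributions` maps feature names to signed scores, e.g. from a linear
    model's weighted inputs or a SHAP-style explainer.
    """
    # Rank features by the magnitude of their influence on this prediction.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    parts = []
    for name, score in ranked[:top_n]:
        direction = "raised" if score > 0 else "lowered"
        parts.append(f"'{name}' {direction} the score")
    return "This recommendation was driven mainly by: " + "; ".join(parts) + "."

# Hypothetical contributions for a single lead-scoring prediction.
print(explain_prediction({
    "days_since_last_contact": -0.4,
    "industry_match": 0.7,
    "email_engagement": 0.5,
}))
# -> This recommendation was driven mainly by: 'industry_match' raised the
#    score; 'email_engagement' raised the score; 'days_since_last_contact'
#    lowered the score.
```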
Empower customers to make ethical choices
Ethics doesn’t stop after development. Where developers provide AI platforms, the users of those platforms effectively own and are responsible for their data. And while developers can provide customers with training and resources to help identify bias and mitigate harm, algorithms that are retrained inadequately or left unchecked can perpetuate harmful stereotypes.
This is why it is important that organisations provide customers and users with the right tools to use these technologies safely and responsibly, and to identify and address problems when they arise.
What if you could allow users to indicate that certain information fields are “sensitive”, for example? In many cases, regulatory restrictions apply to the use of information relating to age, race or gender, as these data fields can introduce bias into a model.
By highlighting data that is highly correlated with these fields – what we call “proxy variables” – products can bring potentially biased data fields to the administrator’s attention.
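As a minimal sketch of how such flagging might work, assuming tabular customer data in a pandas DataFrame: the threshold, column names and use of simple Pearson correlation are illustrative assumptions, and a production system would likely use richer association measures:

```python
import pandas as pd

def flag_proxy_variables(df: pd.DataFrame, sensitive: list[str],
                         threshold: float = 0.8) -> list[tuple[str, str, float]]:
    """Flag columns whose values are highly correlated with sensitive fields.

    Returns (candidate_column, sensitive_column, correlation) triples whose
    absolute Pearson correlation meets or exceeds `threshold`.
    """
    numeric = df.select_dtypes("number")
    flags = []
    for s in sensitive:
        # Pearson correlation only applies to numeric columns; categorical
        # sensitive fields would need a measure such as Cramer's V instead.
        if s not in numeric.columns:
            continue
        for col in numeric.columns:
            if col in sensitive:
                continue
            corr = numeric[col].corr(numeric[s])
            if abs(corr) >= threshold:
                flags.append((col, s, round(float(corr), 2)))
    return flags

# Hypothetical usage: a postcode-derived income index might surface as a
# proxy for age in a customer dataset.
# flags = flag_proxy_variables(customer_df, sensitive=["age"])
```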
With appropriate guidance and training, customers will better understand the impact of deciding whether to exclude sensitive fields and their ‘proxies’ from their model.
Embedding values will benefit everyone
Infusing ethics is not a linear process. It involves a cultural shift, evolving processes, boosting engagement with employees, customers and stakeholders, and equipping users with the tools and knowledge to use technology responsibly.
If we can collectively build upon these three core pillars, we can be certain that AI will be designed and deployed with greater accountability and transparency, thus democratising the benefits of AI across wider society.