Kathy Gibson is at Mobile World Congress 2019 in Barcelona – Big data, analytics, artificial intelligence (AI) and other cognitive technologies are at the heart of the future smart enterprise.

But enterprises are increasingly aware of the fact that AI could deepen the digital divide while simultaneously reinforcing bias.

Beth Smith, IBM's general manager for Watson AI and Data, points out that democratising AI and eliminating bias are big issues, and that organisations must get on top of them if businesses are to achieve true value from AI.

“We see AI everywhere, but so many people still wonder just what it is, or see it as some kind of magic,” Smith says. “While there is some real value, no doubt, and it does some amazing things, AI at its core is not magic.”

In fact, Smith says, AI is a bit like electricity. When electricity was first harnessed, people thought it was a kind of magic; a hundred years later it was democratised by the invention of the lightbulb.

Today, people recognise the potential of AI, but very few are actually using it.

There are three main inhibitors to AI adoption, Smith adds: trust, relevance and skills.

“Trust has become today’s competitive differentiator,” she adds. In fact, a recent study shows that 60% of C-suite executives say trust is what is holding them back from AI.

“So while it’s a topic for competitive differentiation, what is imperative is the scaling and democratisation of AI.”

Trust boils down to four elements: fairness, explainability, adversarial robustness and transparency.

Fairness is largely a question of bias, Smith points out, and is highlighted in cases where AI systems are seen to discriminate against people of particular races or genders.

“It is important for us to focus on this and there are a number of tools that exist to take the bias out of algorithms,” Smith says.

“But we also need to understand bias in runtime applications – and have the ability to mitigate that.”
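
To make that concrete, the snippet below sketches one common form such a runtime check can take: measuring disparate impact, the ratio of favourable-outcome rates between groups, against the “four-fifths” rule of thumb. The loan-approval data, group labels and 0.8 threshold are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of a runtime bias check, in the spirit of the tooling Smith
# describes (IBM's open-source AI Fairness 360 is one real example). The toy
# data and the 0.8 threshold below are illustrative assumptions.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favourable-outcome rates: unprivileged vs privileged group.

    outcomes: list of 1 (favourable) / 0 (unfavourable) model decisions
    groups:   parallel list of group labels for each decision
    privileged: the label of the privileged group
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Toy decisions from a hypothetical loan-approval model
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

di = disparate_impact(outcomes, groups, privileged="A")
# A common rule of thumb flags disparate impact below 0.8 ("four-fifths rule")
if di < 0.8:
    print(f"Possible bias: disparate impact = {di:.2f}")
```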

Explainability is key to making the systems easy to understand, she adds. This means the system needs to be able to explain why a decision was reached.
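
As a hedged illustration of what such an explanation might look like, the sketch below assumes a simple linear scoring model, where each feature’s weighted contribution doubles as the reason for the decision. The weights and feature names are invented for the example.

```python
# Hedged sketch: one simple way a system can "explain why a decision was
# reached". Assumes a linear scoring model, so each feature's weighted
# contribution to the score is itself the explanation. Weights and feature
# names are invented for illustration.

weights = {"income": 0.6, "debt_ratio": -0.9, "years_employed": 0.3}
applicant = {"income": 0.9, "debt_ratio": 0.7, "years_employed": 0.5}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approve" if score > 0 else "decline"

print(f"Decision: {decision} (score = {score:.2f})")
# Report each feature's contribution, most influential first
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {c:+.2f}")
```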

Robustness is important so that systems are not as brittle as early AI deployments have proven to be.

Meanwhile, transparency is about auditability and traceability. “When a decision is made, can you trace back to what model was used in making that decision?” Smith asks.
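
A minimal sketch of what that traceability could look like in practice appears below: each decision is written to an audit log together with the identifier and version of the model that produced it. The field names and the toy model are assumptions for illustration, not a description of any vendor’s system.

```python
# Hedged sketch of the traceability Smith describes: every prediction is
# logged with the model version that produced it, so a decision can later be
# traced back to the exact model. Field names are illustrative assumptions.

import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice an append-only store, not an in-memory list

def predict_with_audit(model_id, model_version, features, predict_fn):
    decision = predict_fn(features)
    AUDIT_LOG.append({
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "features": features,
        "decision": decision,
    })
    return decision

# Toy model: approve when the score exceeds a fixed threshold
toy_model = lambda f: "approve" if f["score"] > 0.5 else "decline"

predict_with_audit("loan-risk", "v1.3.0", {"score": 0.72}, toy_model)
print(json.dumps(AUDIT_LOG[-1], indent=2))
```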

To be relevant, AI needs to be aligned to the organisation’s own business landscape, she adds. “That’s how we democratise it: when it is where your business processes are.”

Skills are critical, Smith adds. “It will never fly if it’s only computer scientists who can do the work.”

This means tools and processes need to be developed that allow a wider range of people to use AI systems.

“It is important to invest and enable these skills throughout the organisation,” Smith adds.

Roger Taylor, chair of the UK’s Centre for Data Ethics and Innovation, agrees that eliminating bias is key for the successful deployment of AI systems.

“Ethics in AI has become a central issue that business is wrestling with,” he says. Indeed, a survey shows that 85% of organisations cite ethics as central to their business decisions.

A lot of the reason for this is that AI is becoming central to every part of our lives, Taylor adds. “And that brings risks with it.”

For instance, human resources (HR) departments now routinely use AI in corporate systems. If a system proves to be biased, the company could end up in court, Taylor points out.

Algorithmic bias, he says, is an existing problem in a new guise. “We already have laws that make it illegal to discriminate against people. The problem is, do we have the tools to make sure we enforce these laws in our systems?”

He points to the US-based COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) system, which was believed to discriminate against black prisoners when it came to bail or parole decisions.

“It was pointed out that it wasn’t much better than humans making decisions – although it was a lot cheaper,” Taylor says. “The thing that really caused anxiety was when ProPublica published research that indicated the system was racist.”

Further analysis indicated that COMPAS didn’t use race as a factor in its decisions, but some of the data it did include could have been biased against black or poor offenders.
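
The sketch below illustrates the proxy problem at the heart of that finding: even when a protected attribute is never shown to a model, a correlated feature can reproduce the same bias. All data here is fabricated for illustration.

```python
# Hedged illustration of the proxy problem in the COMPAS discussion: even
# when a protected attribute is excluded, a correlated feature can carry the
# same bias into decisions. All data here is fabricated for illustration.

# Each record: (postcode, group) -- group is never shown to the model
records = [("P1", "A"), ("P1", "A"), ("P2", "A"),
           ("P1", "B"), ("P2", "B"), ("P2", "B")]

# Toy "model" that only sees the postcode, never the group
high_risk = lambda postcode: postcode == "P2"

# Yet outcomes still split along group lines, because postcode is a proxy
for group in ("A", "B"):
    flagged = sum(1 for p, g in records if g == group and high_risk(p))
    total = sum(1 for _, g in records if g == group)
    print(f"Group {group}: {flagged}/{total} flagged high-risk")
```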

Taylor says this highlights a crucial point about AI, which is that data is never neutral. “It is always socially created,” he says. “Mathematics might be neutral, but data isn’t – it is always created through an interaction of people with systems.”
This means that products and models need to be understood statistically and socially, and the explanation and understanding of these systems cannot be limited to technical experts.

“What matters is how we interpret data and how we use it; but what really matters is the consequences of the outcomes.”