Kathy Gibson reports – Artificial intelligence (AI) cannot function effectively without the trust of all an organisation’s stakeholders – customers, vendors and employees.
And, while any number of high-level principles have been expounded around AI, there is a need for practical guidelines on how to implement responsible AI (RAI), according to Boston Consulting Group (BCG).
Sylvain Duranton, MD and senior partner at BCG, and global head of BCG GAMMA, says RAI is about developing and operating AI systems that integrate human empathy, creativity, and care to ensure that they work in the service of good while achieving transformative business impacts.
What this means in practice is that the goals and outcomes of AI systems need to be fair, unbiased and explainable.
AI systems also need to be safe, secure and robust, while following best practices for data governance to preserve user privacy.
AI systems must also minimise social and environmental impacts, and must augment rather than replace human activity.
BCG undertook a survey to establish the level of responsible AI maturity among organisations around the world.
Steven Mills, MD and partner at BCG, and chief AI ethics officer of BCG GAMMA, says the initial survey looked to assess the level of maturity on seven dimensions:
- Accountability;
- Transparency and explainability;
- Fairness and equity;
- Safety, security and robustness;
- Data and privacy governance;
- Social and environmental impact; and
- Human + AI.
The survey identified a number of key themes and insights, including that there are four levels of maturity, from lagging to leading.
In addition, it became clear from the study that perceptions differ from reality, with many organisations believing that they are more mature than they actually are. Only 46% of respondents accurately estimated their progress, while 54% overestimated where they are on the maturity scale.
There is a significant maturity gap between AI and RAI: fewer than half of organisations that have achieved AI at scale have fully implemented RAI. A lot of work is still required to close this gap, says Duranton.
The main driver for RAI is business benefit, with organisations at a higher level of maturity tending to see more business benefits. A massive 42% of respondents cited business benefits as the reason behind their RAI programmes; 20% did it to meet customer expectations; 16% for risk mitigation; 14% to achieve regulatory compliance; and 6% for social responsibility.
Not all RAI is equal: the most mature elements are data security and privacy; safety, security and robustness; transparency and explainability; and accountability.
Fairness and equity, as well as human plus AI, are the most difficult to address, so investment in these areas often lags behind.
A company's geographic location has a greater bearing on its RAI maturity than the industry it operates in.
Organisations in Europe and North America have the highest RAI maturity, while industries – energy, financial services, industrial goods, automation, healthcare, public sector and consumer – are so close as to be statistically identical.
Duranton explains what the survey results mean for organisations:
Real value is at stake – capture it
- Leading organisations see RAI as a source of value
- 42% of organisations are primarily motivated by business benefits, and only 16% by risk mitigation
- Ignoring RAI overlooks this huge upside potential
Invest proactively to stay ahead of the competition
- The more complex dimensions of RAI take time to mature
- 20% of organisations are already leading, which is a source of competitive advantage
- Organisations need to invest now to get ahead and stay ahead
Be realistic about your starting point
- 54% of all organisations overestimated their maturity
- Understanding the gaps is critical to knowing where to invest
- A clear view of strengths reveals how to accelerate the journey