There is a clear incentive to accelerate AI transformation, and rightly so. Statistics show that the proportion of a company’s revenue that is “AI-influenced” more than doubled between 2018 and 2021 and is expected to triple by 2024.

By Hina Patel, applied intelligence lead, and Ntsako Baloyi, data and applied intelligence lead, for Accenture in Africa

Much of this growth can be attributed to “AI Achievers” – organisations that are already outperforming their peers on AI-influenced customer experience (CX) and Environmental, Social, and Governance (ESG) metrics.

What are these Achievers doing right? They are responsible by design – a quality that, alongside other success factors, compounds into better business results.

However, the vast majority (94%) are still struggling to operationalise responsible AI across all of its critical aspects. Our recent Accenture report, The Art of AI Maturity, offers recommendations on how to navigate the AI journey well.

The roadblocks to readiness

The most significant barrier is the complexity of scaling AI responsibly, which involves multiple stakeholders and spans the entire enterprise and ecosystem.

According to our survey, nearly 70% of respondents do not have a fully operationalised and integrated Responsible AI governance model. As new requirements emerge, they must be baked into product development processes and linked to other regulatory areas such as privacy, data security, and content.

Furthermore, organisations may be unsure of what to do while regulations are still being defined. This lack of clarity can result in strategic paralysis as businesses adopt a “wait and see” attitude. Such companies have little choice but to be compliance-focused, prioritising specific requirements over the underlying risk – an approach that leads to problems and lost value down the road.

Secondly, risk management frameworks are required for all AI, yet they are far from universal. Although such a framework is needed to develop AI responsibly, only about 47% of the organisations surveyed have one in place.

The proposed EU AI Act defines various categories of AI risk, determined primarily by the use case or context. Even for those who have created a framework, the challenge lies in applying it sustainably across the organisation. Whether an AI system is responsible cannot be judged at a single point in time; it has to be checked continuously.
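As a loose illustration only (not legal guidance), that use-case-driven tiering can be captured in code as a first-pass triage. The tier names follow the proposed Act’s broad structure; the example use-case mappings are our assumptions, not an authoritative reading of the regulation:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"           # e.g. social scoring by public authorities
    HIGH = "strict obligations"           # e.g. credit scoring, recruitment
    LIMITED = "transparency obligations"  # e.g. chatbots must disclose they are AI
    MINIMAL = "no new obligations"        # e.g. spam filters

# Hypothetical first-pass mapping from use case to tier; a real assessment
# depends on detailed context and legal review, not a lookup table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    # Unknown use cases default to HIGH until reviewed -- the cautious choice.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(triage("credit_scoring"))  # RiskTier.HIGH
```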

Continuous checking is exactly where organisations fall short: our survey found that 70% have yet to implement the ongoing monitoring and controls required to mitigate AI risks.
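In practice, “ongoing monitoring” can be as concrete as a recurring job that compares live data against a training-time baseline and raises an alert when they diverge. Below is a minimal sketch using the population stability index (PSI), one common drift measure; the 0.2 threshold is a widely used rule of thumb, not a figure from our report:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Measure how far a feature's live distribution has drifted from its baseline."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero in sparse bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature values at training time
live = rng.normal(0.6, 1.0, 10_000)      # same feature in production, drifted

psi = population_stability_index(baseline, live)
if psi > 0.2:  # common rule-of-thumb threshold for significant drift
    print(f"ALERT: feature drift detected (PSI={psi:.3f}); trigger a review")
```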

Thirdly, you’re only as strong as your weakest partner. Therefore, companies must consider their entire AI value chain (focusing on high-risk systems) rather than just the proprietary elements. About 39% of respondents see collaborations with partners as one of their most significant internal challenges to regulatory compliance.

Only 12% have included Responsible AI competency requirements in supplier agreements with third-party providers. Understanding respective roles and responsibilities in collaborative ecosystems and platforms will become increasingly important – and complex to navigate. This will only intensify as organisations increasingly become both users and providers of AI over the next three years.

Fourthly, culture is key, but talent is scarce. Every employee must understand and believe in the approach the organisation is taking to ensure the responsible use of AI.

According to the survey results, most organisations have a long way to go before they reach that goal. More than half of organisations do not yet have specific roles for Responsible AI embedded throughout the organisation. With data science talent in short supply, organisations must consider how to attract or develop the specialist skills required for Responsible AI roles while also keeping in mind that teams responsible for AI systems should reflect a diversity of geography, backgrounds, and ‘lived experience’.

Their diverse perspectives are critical for identifying potential bias and unfairness and for minimising unconscious bias in product design, construction, and testing. When it comes to AI, culture is what unites the entire organisation around responsible principles and practices.

Finally, measurement is critical, but success is defined by non-traditional KPIs. The impact of AI cannot be measured solely by traditional KPIs such as revenue generation or efficiency gains, although organisations frequently fall back on these conventional benchmarks.

Organisations cannot be confident that a system is fair unless they use established technical methods for measuring and mitigating its risks. Specialised knowledge is required to define and assess qualities such as algorithmic fairness across data, models, and outcomes.
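One established measure is demographic parity: whether a model’s positive-outcome rate differs across groups. A minimal sketch, assuming binary predictions and a single protected attribute (the sample data below is purely illustrative):

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in selection (positive-prediction) rate between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Illustrative loan decisions: 1 = approved, 0 = declined, two groups A and B.
y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 0, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap = demographic_parity_difference(y_pred, group)
print(f"Selection-rate gap: {gap:.2f}")  # A approves 0.80, B approves 0.20 -> 0.60
```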

While there is no set path forward, it is critical to take a proactive approach to building Responsible AI readiness in order to overcome – or avoid – the barriers outlined above.

All roads lead to responsibility

Organisations must shift from a reactive compliance strategy to the proactive development of mature Responsible AI capabilities to be responsible by design. When the enterprise’s foundations for responsible AI use are in place, it becomes easier to adapt as new regulations emerge. Businesses can then concentrate on performance and competitive advantage.

Here are three simple steps to help businesses become responsible by design:

* Risk, Policy and Control: Strengthen compliance with Responsible AI principles and current laws and regulations while monitoring future ones; develop policies to mitigate risk; and operationalise those policies through a risk management framework with regular reporting and monitoring.

* Technology and Enablers: Develop tools and techniques to support Responsible AI principles such as fairness, explainability, robustness, accountability and privacy, and build these into AI systems and platforms (see the sketch after this list).

* Culture and Training: Empower leadership to elevate Responsible AI as a critical business imperative and provide all employees with training to give them a clear understanding of Responsible AI principles and how to translate these into actions.
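To make these principles enforceable rather than aspirational, some teams wire such checks into the deployment path itself, so a model that fails a fairness, stability, or performance test never ships. A minimal sketch of such a release gate; the metrics and thresholds here are hypothetical placeholders that a governance function would set:

```python
from typing import Callable, Dict

# Placeholder metrics; in practice these come from real evaluation runs
# (e.g. the fairness and drift measures sketched earlier in this article).
METRICS = {"parity_gap": 0.08, "psi": 0.12, "accuracy": 0.91}

# Hypothetical policy thresholds -- the governance function owns these numbers.
CHECKS: Dict[str, Callable[[Dict[str, float]], bool]] = {
    "fairness": lambda m: m["parity_gap"] <= 0.10,
    "stability": lambda m: m["psi"] <= 0.20,
    "performance": lambda m: m["accuracy"] >= 0.85,
}

def release_gate(metrics: Dict[str, float]) -> bool:
    failures = [name for name, passed in CHECKS.items() if not passed(metrics)]
    for name in failures:
        print(f"BLOCKED: {name} check failed")  # feeds the regular reporting loop
    return not failures

if release_gate(METRICS):
    print("All Responsible AI checks passed; model may be promoted.")
```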

Almost two-thirds of those polled see AI as a critical enabler of their strategic priorities, and scaling it can deliver high performance for customers, shareholders, and employees alike. But to do so responsibly and sustainably, organisations must overcome the common roadblocks described above.

Being responsible by design can help organisations overcome these barriers and scale AI with confidence. By shifting from a reactive AI compliance strategy to the proactive development of mature Responsible AI capabilities, they will have the foundations to adapt as new regulations and guidance emerge – and can focus instead on performance and competitive advantage.