Organisations need to prioritise responsible AI urgently, not only to adhere to emerging regulations but also to foster trust and inclusivity in AI-driven innovations.

This is among the headline findings of a Qlik-sponsored study by TechTarget’s Enterprise Strategy Group (ESG) that examines the state of responsible AI practices across industries. The study highlights the pressing need for robust ethical frameworks, transparent AI operations, and cross-industry collaboration to navigate the complexities of integrating AI into business processes.

The ESG research report reveals insightful data on the adoption, challenges, and strategic initiatives surrounding responsible AI:

* Widespread adoption of AI technologies: An overwhelming 97% of surveyed organisations are actively engaging with AI, with a significant portion (74%) already incorporating generative AI technologies in production. This marks a notable shift towards AI-driven operations across sectors.

* Investment versus strategy gap: While all respondents report active investment in AI, 61% are dedicating a substantial budget to these technologies. Strategic planning lags behind, however, with 74% of organisations admitting they still lack a comprehensive, organisation-wide approach to responsible AI.

* Challenges in ethical AI practices: The report highlights several key challenges. A significant 86% of organisations struggle to ensure transparency and explainability in AI systems, pointing to a critical need for solutions that demystify AI processes. In addition, almost all organisations (99%) face hurdles in staying compliant with AI regulations and standards, underscoring the complex regulatory landscape surrounding AI technologies.

* Operational impact and prioritisation of responsible AI: Despite the challenges, a robust 74% of organisations rate responsible AI as a top priority, signalling a growing recognition of its importance. Yet, over a quarter of organisations have encountered increased operational costs, regulatory scrutiny, and market delays due to inadequate responsible AI measures.

* Stakeholder engagement in AI decision-making: The research points to a broad stakeholder landscape in responsible AI, with particular emphasis on IT departments playing a proactive role. This highlights the necessity for inclusive and collaborative approaches to ethical AI deployment and governance.

Brendan Grady, general manager of Qlik’s analytics business unit, says: “The ESG Research echoes our stance that the essence of AI adoption lies beyond technology – it’s about ensuring a solid data foundation for decision-making and innovation. This study underscores the importance of responsible AI integration as a catalyst for sustainable and impactful organisational growth.”

Michael Leone, principal analyst at ESG, comments: “Our research confirms the growing adoption of AI across industries, but it also highlights a gap in effectively implementing responsible AI practices. As organisations accelerate their AI initiatives, the necessity for a solid foundation that supports ethical guidelines and robust data governance becomes crucial.”