Meaningful artificial intelligence (AI) deployments are just beginning to take place, according to Gartner.
Gartner’s 2018 CIO Agenda Survey shows that 4% of CIOs have implemented AI, while a further 46% have developed plans to do so.
“Despite huge levels of interest in AI technologies, current implementations remain at quite low levels,” says Whit Andrews, research vice-president and distinguished analyst at Gartner. “However, there is potential for strong growth as CIOs begin piloting AI programmes through a combination of buy, build and outsource efforts.”
As with most emerging or unfamiliar technologies, early adopters are facing many obstacles to the progress of AI in their organisations. Gartner analysts have identified the following four lessons that have emerged from these early AI projects.
Aim low at first
“Don’t fall into the trap of primarily seeking hard outcomes, such as direct financial gains, with AI projects,” says Andrews. “In general, it’s best to start AI projects with a small scope and aim for ‘soft’ outcomes, such as process improvements, customer satisfaction or financial benchmarking.
“Expect AI projects to produce, at best, lessons that will help with subsequent, larger experiments, pilots and implementations. In some organisations, a financial target will be a requirement to start the project.

“In this situation, set the target as low as possible.
“Think of targets in the thousands or tens of thousands of dollars, understand what you’re trying to accomplish on a small scale, and only then pursue more-dramatic benefits.”
Focus on augmenting people, not replacing them
Historically, big technological advances have often been associated with a reduction in staff head count. While reducing labour costs is attractive to business executives, it is likely to create resistance from those whose jobs appear to be at risk. By pursuing this line of thinking, organisations can miss real opportunities to use the technology effectively.
“We advise our clients that the most transformational benefits of AI in the near term will arise from using it to enable employees to pursue higher-value activities,” Andrews adds.
Gartner predicts that by 2020, 20% of organisations will dedicate workers to monitoring and guiding neural networks.
“Leave behind notions of vast teams of infinitely duplicable ‘smart agents’ able to execute tasks just like humans,” says Andrews. “It will be far more productive to engage with workers on the front line. Get them excited and engaged with the idea that AI-powered decision support can enhance and elevate the work they do every day.”
Plan for knowledge transfer
Conversations with Gartner clients reveal that most organisations aren’t well prepared to implement AI. Specifically, they lack internal skills in data science and plan to rely heavily on external providers to fill the gap. In the CIO survey, 53% of organisations rated their own ability to mine and exploit data as “limited”, the lowest level.
Gartner predicts that through 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them.
“Data is the fuel for AI, so organisations need to prepare now to store and manage even larger amounts of data for AI initiatives,” says Jim Hare, research vice-president at Gartner. “Relying mostly on external suppliers for these skills is not an ideal long-term solution. Therefore, ensure that early AI projects help transfer knowledge from external experts to your employees, and build up your organisation’s in-house capabilities before moving on to large-scale projects.”
Choose transparent AI solutions
AI projects will often involve software or systems from external service providers. It’s important that any service agreement builds in some insight into how the system reaches its decisions.
“Whether an AI system produces the right answer is not the only concern,” says Andrews. “Executives need to understand why it is effective, and to have insight into its reasoning when it’s not.”
Although it may not always be possible to explain all the details of an advanced analytical model, such as a deep neural network, it’s important to at least offer some kind of visualisation of the potential choices. In fact, in situations where decisions are subject to regulation and auditing, it may be a legal requirement to provide this kind of transparency.