In a research report commissioned by Hewlett Packard Enterprise, almost half (44%) of IT leaders surveyed believe their organisations are fully set up to realise the benefits of AI – but the report reveals critical gaps in their strategies.
Issues such as a lack of alignment between processes and metrics lead to a fragmented approach, which further exacerbates delivery problems.
The report, “Architect an AI Advantage”, which surveyed more than 2 000 IT leaders from 14 countries, found that while global commitment to AI shows growing investments, businesses are overlooking key areas that will have a bearing on their ability to deliver successful AI outcomes – including low data maturity levels, possible deficiencies in their networking and compute provisioning, and vital ethics and compliance considerations.
The report also uncovered significant disconnects in both strategy and understanding that could adversely affect future return on investment (ROI).
“The report confirms what we see with businesses in South Africa: While the support and excitement for AI projects is high, there is often a lack of comprehensive understanding of what these projects entail, especially at the executive level,” says President Ntuli, MD of HPE South Africa.
“Without organisation-wide alignment on a holistic approach to AI, companies may waste time and resources on initiatives that fail to deliver. However, by recognising major blind spots early on, there is significant opportunity for local organisations to develop robust strategic foundations and accelerate their AI agendas.”
Strong AI performance that impacts business outcomes depends on quality data input, but the research shows that while organisations clearly understand this – labelling data management as one of the most critical elements for AI success – their data maturity levels remain low.
Only a small percentage (7%) of organisations can run real-time data pushes/pulls to enable innovation and external data monetisation, while just 26% have set up data governance models and can run advanced analytics.
Of greater concern, fewer than 6 in 10 respondents said their organisation is completely capable of handling any of the key stages of data preparation for use in AI models – from accessing (59%) and storing (57%), to processing (55%) and recovering (51%). This discrepancy not only risks slowing down the AI model creation process, but also increases the probability the model will deliver inaccurate insights and a negative ROI.
A similar gap appeared when respondents were asked about the compute and networking requirements across the end-to-end AI lifecycle.
On the surface, confidence levels look high in this regard: 93% of IT leaders believe their network infrastructure is set up to support AI traffic, while 84% agree their systems have enough flexibility in compute capacity to support the unique demands across different stages of the AI lifecycle.
Gartner expects that GenAI will play a role in 70% of text- and data-heavy tasks by 2025, up from less than 10% in 2023, yet fewer than half of IT leaders admitted to having a full understanding of what the demands of the various AI workloads across training, tuning and inferencing might be – calling into serious question how accurately they can provision for them.
Organisations are failing to connect the dots between key areas of business, with over a quarter (28%) of IT leaders describing their organisation’s overall AI approach as “fragmented”. As evidence of this, over a third (35%) of organisations have chosen to create separate AI strategies for individual functions, while 32% are creating different sets of goals altogether.
More dangerous still, it appears that ethics and compliance are being almost completely overlooked, despite growing scrutiny from both consumers and regulatory bodies.
The research shows that legal/compliance (13%) and ethics (11%) were deemed by IT leaders to be the least critical for AI success. In addition, the results showed that almost 1 in 4 organisations (22%) aren’t involving legal teams in their business’s AI strategy conversations at all.
As businesses move quickly to capitalise on the hype around AI, those without proper AI ethics and compliance frameworks risk exposing their proprietary data – a cornerstone of their competitive edge and brand reputation.
Among the issues, businesses lacking an AI ethics policy risk developing models without proper compliance and diversity standards, resulting in damage to the company’s brand, lost sales, or costly fines and legal battles.
There are additional risks as well, since the quality of the outcomes from AI models is limited by the quality of the data they ingest. This is reflected in the report, which shows data maturity levels remain low. Combined with the finding that half of IT leaders admitted they lack a full understanding of the IT infrastructure demands across the AI lifecycle, this increases the overall risk of developing ineffective models, including the impact of AI hallucinations.
In addition, because the power demand of running AI models is extremely high, they can contribute to an unnecessary increase in data centre carbon emissions. These challenges lower the ROI on a company’s capital investment in AI and can further damage the overall company brand.
“AI is the most data- and power-intensive workload of our time, and to effectively deliver on the promise of GenAI, solutions must be hybrid by design and built with a modern AI architecture,” said Dr Eng Lim Goh, senior vice-president for data and AI at HPE. “From training and tuning models on-premises, in a colocation facility or in the public cloud, to inferencing at the edge, GenAI has the potential to turn data into insights from every device on the network.
“However, businesses must carefully weigh the advantage of being a first mover against the risk of not fully understanding the gaps across the AI lifecycle, otherwise large capital investments can end up delivering a negative ROI.”