Generative AI tools like ChatGPT can massively improve an enterprise's productivity and competitiveness, but only if we have a realistic view of the technology's risks and limitations.

By Michael Langeveld, chief technology officer at Hewlett Packard Enterprise (HPE) South Africa

Chatbots have long been considered one of the most promising applications of artificial intelligence (AI). A bot like ChatGPT brings AI to scale: it is powered by a large language model – a neural network with several hundred billion parameters – whose training at massive scale has given rise to what is today called generative AI.

Current models not only enable conversations in natural language; they can also do everything from writing scientific papers and hacking instructions to finding bugs in code and creating pictures in the style of Vincent van Gogh.

But there are a multitude of practical, legal and ethical problems that need to be considered, including the fact that these machines can make mistakes, lie with a poker face and deliver biased judgments.

Towards a general enterprise intelligence

Many of the current experiments with generative AI showcase the incredible potential this technology holds to optimise enterprises’ business processes, increase their productivity and strengthen their competitive advantage.

In practice, this could include the use of a classic chatbot to improve customer service, to answer questions from the legal or R&D department or to generate step-by-step instructions for troubleshooting a faulty production machine.

But this is only the first step on the AI journey. In the future, an AI chatbot could answer virtually any question, such as the current status of a product launch, relevant changes in tax law, or the appropriate response to geopolitical events.

Generative AI: only the tip of the iceberg

Generative AI initiatives in the enterprise will typically start with experiments, pilots and proofs of concept. But if the goal is to move from pilot to production at scale, there are a number of strategic, organisational and technical prerequisites and dependencies that must be considered right from the start. These include:

* Data maturity level: A generative AI initiative will only survive and scale if a company has reached a certain data maturity level – i.e. strategic, organisational and technical capabilities that enable it to create value from data using AI.

* Data architecture and governance: If an AI chatbot is to be used for company-specific use cases, it must be continuously trained with the company's own data, so it relies on this data being available in sufficient quantity and quality (a minimal sketch of one way to surface such data to a chatbot follows this list). When it comes to scaling the chatbot deployment, a consistent, company-wide data architecture and governance are required.

* Hybrid platform approach: Model training and inference can run on centralised AI supercomputers operated by the large language model providers (e.g. OpenAI, Aleph Alpha, Google), but there are various reasons why, in the long run, companies will have to establish a hybrid or edge-to-cloud platform approach.

* Digital sovereignty: It’s highly likely that the market for large language models will be dominated by a small handful of providers worldwide. This makes conversations around digital sovereignty important – that is, the reduction of dependencies and the protection of intellectual property.

* Process integration: When planning AI applications, organisations must integrate them into existing operational and technical processes. Relevant processes include application and data lifecycle management, security, operational planning and control processes, operational safety and risk management.
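
To make the data dependency more concrete, here is a minimal illustrative sketch of one common pattern – retrieval-augmented prompting – in which relevant company documents are looked up at query time and handed to the language model as context, instead of (or alongside) retraining the model itself. The document store, scoring function and prompt format below are assumptions made purely for illustration, and the actual call to a language model is deliberately left out.

```python
# Minimal sketch: grounding a chatbot in company data via retrieval.
# All documents, names and formats here are hypothetical illustrations.
import math
from collections import Counter

# Hypothetical internal documents the chatbot should draw on.
COMPANY_DOCS = [
    "Product X launch is scheduled for Q3; marketing assets are in review.",
    "The new tax regulation changes VAT reporting deadlines from 2025.",
    "Machine 14 error E42: check the feeder belt tension and restart.",
]

def tokenize(text: str) -> list[str]:
    """Lowercase the text and strip simple punctuation."""
    return [t.strip(".,;:?!").lower() for t in text.split()]

def score(query: str, doc: str) -> float:
    """Rough lexical-overlap score between a query and a document."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    overlap = sum(min(q[w], d[w]) for w in q)
    return overlap / math.sqrt(len(tokenize(doc)) + 1)

def build_prompt(question: str) -> str:
    """Pick the most relevant document and wrap it into a prompt that
    would then be sent to a large language model (call not shown)."""
    best_doc = max(COMPANY_DOCS, key=lambda doc: score(question, doc))
    return (
        "Answer using only the company context below.\n"
        f"Context: {best_doc}\n"
        f"Question: {question}\n"
    )

if __name__ == "__main__":
    print(build_prompt("What is the status of the Product X launch?"))
```

In practice, the simple lexical scoring would be replaced by vector embeddings and a governed document pipeline, which is exactly where the data quality, architecture and governance points above come into play.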

Start or wait?

According to Gartner's latest AI hype cycle, which was published before ChatGPT went online, generative AI is sitting just before the peak of inflated expectations. Assuming that we've now reached that peak, we can soon expect a period of disappointment and doubt about whether AI will really live up to our expectations. Gartner predicts that the plateau of productivity will be reached within two to five years.

So should you start now or wait? It depends on your innovation strategy. Companies that want to increase their competitiveness through continuous innovation should definitely start now. But the hype should not obscure the fact that the use of AI chatbots in the enterprise – like any AI deployment – is very complex. It requires planning, preparation, know-how, training and continuous development if it is to scale and deliver sustainable productivity gains.