Over the last decade, businesses have come to appreciate the network’s role beyond its connectivity offering: it has been vital in supporting the ever-evolving digital demands of
today’s workplaces – from enabling a work-from-everywhere culture to unlocking the increasing variety of IoT use cases deployed across businesses.

By Mandy Duncan, country manager for HPE Aruba Networking South Africa

Fast forward to 2025, and we’re seeing enterprise adoption of AI gaining pace rapidly, with around 80% of African CEOs prioritising investment in AI and other advanced technologies. This gives rise to an important question.

Are current enterprise networks offering effective connections, from the cloud right to the edge, in support of the growing compute-intensive AI workloads?

Well yes: 93% of global IT leaders believe their network infrastructure is already set up to support AI traffic. This was one of the findings from the Architect an AI Advantage survey that HPE recently commissioned from Sapio Research.

However, the network ranked only fifth on IT leaders' list of priority investments for supporting AI efforts. And perhaps more concerningly, when we dug a little deeper, we found that fewer than half of respondents said they fully understand the nuanced networking needs across the full AI lifecycle. The fact that only a small percentage of CIOs in South Africa consider technical expertise the most crucial skill for their role may further exacerbate this gap in understanding.

In overestimating their readiness, IT leaders may not yet have given their network the appropriate level of consideration or investment that it needs, which could lead to inadequate performance, security disconnects, stalled progress, and more. The network has an important role to play in long-term AI success, meaning that it’s crucial to have the right network strategy in place, especially to support activity from edge to cloud.


The network’s role in AI success

The role of the network will change depending on which of the three main phases – data acquisition, model training and tuning, and inferencing (where a trained AI model analyses new data to produce specific outputs) – it is supporting within the AI lifecycle.

A nuanced understanding of the network’s role across each phase of the AI lifecycle is critical if businesses are to prevent the network from becoming a bottleneck in AI processes.


Data acquisition phase

Data is the lifeblood of any AI project. For any AI system to make smart and accurate decisions, it first needs access to large and diverse datasets to learn from. Controlling this data is therefore essential to delivering a safer, more precise, and more effective AI solution. This is a mission for the network: to capture, secure, and transport data quickly and easily from any source to any destination across edge, data centre, and cloud.

But, to succeed in its mission, the network must have an integrated connectivity fabric with a single operational services interface running from edge to cloud.

Access to the edge, the point at which people and devices connect, is particularly important here because it’s where an organisation generates and consumes data, without it first needing to travel through the cloud or data centre for processing.

The edge is playing host to a growing number of IoT devices as well, and these offer AI a rich source of data for training and inferencing, which the network should support. How? By acting as an onramp for quick delivery of high-quality, insight-rich data that is captured and processed right at the source (the edge) and then transported to wherever it is needed. This might be the data lake for AI training, or it could be used operationally for AI inferencing.


Model training and tuning phase

Once data has been captured, it must be used to train the model. Training and retraining AI models on huge amounts of data is an intensive activity, requiring heavy computational horsepower matched with low-latency, high-performance network interconnects for the hundreds or thousands of graphics processing units (GPUs) needed to complete precise calculations at high speed.

Simply put, the network needs to be up to the task of delivering optimised, power-efficient, and predictable performance for training and tuning across multitenant workload environments.


AI inferencing phase

This phase involves deploying newly trained AI models to wherever the business has chosen to run them (that is, on-premises, at the edge, or in the cloud) so that the fresh data that’s generated there can be analysed and acted upon quickly to unlock business value.

Inference workloads for large models tend to need GPU acceleration, although some applications can rely on central processing units (CPUs) alone. Either way, the network must provide the right connectivity performance; without it, latency and packet loss will dramatically affect the model's efficacy.

To realise the full promise of AI, enterprises must reimagine their networks as foundational enablers, optimised for performance, visibility, automation, and security at every turn. A broad-based infrastructure, spanning wired, Wi-Fi, IoT, and private 5G, enables seamless data collection and management.

Central visibility and control in the form of a unified edge-to-cloud network prevents silos, while automation reduces costs and boosts efficiency. Finally, integrated security protects data from the outset, ensuring trustworthy AI outputs.

As AI continues to advance, the network must evolve in tandem, becoming not only faster, but more adaptable, resilient, and secure. Meeting the surging demands of intelligent workloads requires a network infrastructure that is both flexible and ready to scale at a moment’s notice. Only by considering their network as part of their AI strategy can businesses unlock AI’s full transformative power.