Kathy Gibson reports from Tognana, Italy – The data centre that you knew in the past is outdated – the AI factory is what will drive innovation going forward.

Karsten Winther, president, Europe, Middle East and Africa at Vertiv, explains that the data centre of the future will deliver the processing and data required to enable new technologies and innovations.

But, while GPUs are driving the new AI factories, it is the infrastructure – networking, power and cooling – that enables the GPUs to perform.

“Everyone is claiming a partnership with Nvidia, but I like to think ours is a little bit different,” Winther says. “Vertiv’s cooling and power technologies are vital to enable the compute; and the data centres we are talking about now would not be possible without companies like Vertiv.”

Nvidia is delivering tremendous and unprecedented advances in GPU technology, but this in turn creates challenges around power, cooling and the technologies needed to draw heat away.

“We need to keep developing and delivering technologies that can keep pace with that compute power,” Winther explains. “That requires a lot of collaboration on R&D, and also joint approaches with key governments and clients.”

Perhaps the biggest challenge for new data centre builds is the speed of development, which could render a facility obsolete almost before it’s completed.

“A new data centre opening today could have been planned five or six years ago, before AI was an issue. So we need to deploy solutions at speed.”

Indeed, over the last year, data centres have seen 24% data growth, 16% server growth and an 8% to 9% growth in infrastructure capital expenditure (capex). At the same time, around 100GW of power capacity has been added between 2023 and 2025, almost doubling every year.

These AI factories are demanding massive investments to unlock value: industry-wide investments of $350-billion have been seen this year, with another $2-trillion planned before the end of 2030.

“On the technology side, we are seeing an acceleration in compute rack and POD density,” Winther explains. “Hopper was 70kW, Blackwell is almost double that, and by 2028 it will be 10 times what Hopper was. To get there, we have to overcome tremendous challenges in terms of power inefficiency and the heat generated.”

The unit of compute has changed over the last year or two, he adds. “It is no longer the chip – it is the system, the factory. It is the complete 400MW data centre campus, designed to work together as a single system. Densification at rack, room and site level is required to enable the GPU performance roadmap.”

The GPU, at the heart of this integrated system, requires the whole complex to manage electrical input, heat and networking.

“To ensure we stay ahead of the game, we need to be thinking and planning multiple generations ahead of the GPU announcements,” Winther adds. “And we need to be able to deliver the system as one module so it can be brought to life quickly.”

Modularisation is the key to doing this, with modular data centre building blocks comprising pre-engineered modules that are pre-integrated with power, cooling and control, validated before they leave the factory, and scalable and reconfigurable. They are also embedded with digital intelligence from design to operation.

“This enables us to speed deployment,” Winther says. “It means the client has a much shorter time from investment to capitalising on the assets.”

The modular systems Vertiv delivers include the power train, the thermal chain and hybrid cooling technologies.

The company takes its responsibility as a data centre enabler seriously. “We believe the world depends on us to be able to manage the data we power and cool. And we spend a lot of time ensuring that the world can benefit from AI factories.”

To this end, Vertiv has increased its manufacturing capacity across Europe and the Americas, while opening customer experience centres across the globe.