Kathy Gibson reports from Tognana, Italy – Nations that want to succeed in the age of artificial intelligence (AI) must ensure they have the infrastructure in-country to support AI sovereignty.
Carlo Ruiz, vice-president: AI solutions and operations EMEA at Nvidia, tells delegates to Vertiv Week, taking place here, that technology is moving faster than ever, driving new projects and initiatives at scales never imagined before.
Today, sovereign AI is assuming new significance, with nations looking to invest in domestic AI capabilities and infrastructure across the stack.
Ruiz points out that visionary leaders who recognise the need to go beyond vision to execution are critical.
“In contrast to the past, we are now seeing execution on the ground throughout Europe,” he says. “We think every nation must have the critical infrastructure to drive its own AI, with its own rules.”
He draws a parallel to the original Industrial Revolution: while Europe led that shift, some countries fell behind and took decades to claw their way back.
“Now, we find ourselves in a new industrial revolution where the AI factory is needed to produce tokens, or units of intelligence, to unlock innovation in science, enterprise, manufacturing and more.”
Countries that have this intelligence can nurture local ecosystems and startups, helping to stimulate and retain talent, Ruiz adds.
Meanwhile, on the government side, there is an opportunity for nations to deliver additional value and engage with citizens in different ways.
Sovereign AI also plays a role in national safety, while enabling scientific discovery.
“It is critical to have this infrastructure if countries want to play a role in the digital future.”
But an AI factory is more challenging than just investing in graphics processing units (GPUs), Ruiz points out.
“Yes, you need a GPU – but the reality is more challenging.”
GPUs must be part of a complete system that encompasses data processing units (DPUs), networking, power and cooling.
The software and building blocks must then be layered on top of the infrastructure; Nvidia spends about 80% of its R&D budget on these tools. “It’s great to have the AI factory, but you need to produce the intelligence.
“We are enabling researchers, startups and developers, who can use the AI factory to do their own innovation without having to reinvent the wheel,” Ruiz says.
The workloads in AI factories have shifted from large-scale training to inferencing at scale, he adds.
Inferencing requires what Nvidia calls a SMART platform: scale and complexity; multi-dimensional performance; architecture and software; RoI driven by performance; and technology ecosystem with an installed base.
The speed of technology development is creating an environment in which the ecosystem is vital, he adds. Nvidia has a 12-month cycle for new GPUs, driving new levels of speed and performance all the time. Blackwell, for instance, runs 5,4-times faster than Hopper, and within three years a further 10-fold increase in speed and performance can be expected.
Next year’s GPU, Rubin, will further increase performance and reduce energy consumption.
“So it’s critical that we work with the whole ecosystem,” Ruiz says. “We are now talking about 150kW per rack; shortly it will be 300kW. In a couple of years we will be at 1MW per rack.”
Nvidia has shifted its focus from just GPUs to data centre scale, and now to integrated networking. “It is important to think about infrastructure at scale. And if you can’t plan for energy, you will have different issues in the future.”