By Ronak Singhal – Intel, together with Microsoft, has contributed the Scalable I/O Virtualisation (SIOV) specification to the Open Compute Project (OCP).

This gives device and platform manufacturers access to an industry-standard specification for hyperscale virtualisation of PCI Express and Compute Express Link devices in cloud servers. When adopted, the SIOV architecture will enable data centre operators to deliver more cost-effective access to high-performance accelerators and other key I/O devices for their customers, as well as relieve I/O device manufacturers of the cost and programming burdens imposed by previous standards.

The new SIOV specification is a modernised hardware and software architecture that enables efficient, mass-scale virtualisation of I/O devices and overcomes the scaling limitations of earlier I/O virtualisation technologies.

Under the terms of the OCP contribution, any company can adopt SIOV technology and incorporate it into their products under an open, zero-cost license.

In cloud environments, I/O devices including network adaptors, GPUs and storage controllers are shared among many virtualised workloads requiring their services. Hardware-assisted I/O virtualisation technologies enable efficient routing of I/O traffic from the workloads through the virtualisation software stack to the devices. This keeps overhead low and performance close to “bare-metal” speeds.

I/O virtualisation needs to evolve from enterprise scale to hyperscale

The first I/O virtualisation specification, Single-Root I/O Virtualisation (SR-IOV), was released more than a decade ago and conceived for the virtualised environments of that era, which generally ran fewer than 20 virtualised workloads per server.

SR-IOV loaded much of the virtualisation and management logic onto the PCIe devices, which increased device complexity and reduced the I/O management flexibility of the virtualisation stack.

In the ensuing years, CPU core counts grew, virtualisation stacks matured, and container and microservices technology exponentially increased workload density.

As we transition from “enterprise scale” to “hyperscale” virtualisation, it’s clear that I/O virtualisation must evolve, as well.

SIOV is hardware-assisted I/O virtualisation designed for the hyperscale era, with the potential to support thousands of virtualised workloads per server.

SIOV moves the non-performance-critical virtualisation and management logic off the PCIe device and into the virtualisation stack. It also uses a new scalable identifier on the device, the PCIe Process Address Space ID (PASID), to address each workload’s memory.
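The PASID mechanism is advertised through a standard PCIe extended capability (capability ID 0x001B in the PCIe specification). As a minimal, hedged sketch of how system software might check whether a device exposes it, the function below walks the extended-capability list in a raw config-space image; on Linux such an image can be read (with privileges) from `/sys/bus/pci/devices/<bdf>/config`, a path used here only as an illustration:

```python
import struct

PASID_ECAP_ID = 0x001B  # PASID Extended Capability ID (PCIe spec)
ECAP_START = 0x100      # extended capabilities begin at offset 0x100

def find_ext_capability(cfg: bytes, cap_id: int):
    """Return the offset of extended capability `cap_id` in a raw
    PCIe config-space image, or None if the device lacks it."""
    offset = ECAP_START
    seen = set()  # guard against malformed, looping capability chains
    while offset and offset not in seen and offset + 4 <= len(cfg):
        seen.add(offset)
        header, = struct.unpack_from("<I", cfg, offset)
        if header & 0xFFFF == cap_id:          # bits 0-15: capability ID
            return offset
        offset = (header >> 20) & 0xFFC        # bits 20-31: next pointer
    return None

def supports_pasid(cfg: bytes) -> bool:
    return find_ext_capability(cfg, PASID_ECAP_ID) is not None
```

A hypervisor or device driver performing a check like this can decide at setup time whether a device can tag DMA requests per workload, which is the property SIOV builds on.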

Virtualised I/O devices become much more configurable and scalable while delivering near-native performance to every VM, container or microservice they simultaneously support.

These improvements can reduce device cost, provide device access to large numbers of VMs and containers more efficiently, and give the virtualisation stack more flexibility for provisioning and composability.

SIOV gives strained data centres an efficient path to deliver high-performance I/O and acceleration for advanced AI, networking, analytics and other demanding virtual workloads shaping our digital world.

* Ronak Singhal is an Intel Senior Fellow and chief architect for Intel Xeon Roadmap & Technology Leadership