The industry is abuzz with talk of storage virtualisation, with expectations building around universal management of multi-vendor storage and other promised answers to storage challenges.

Attempts to deliver on those expectations have produced plenty of confusion, debate and rushed-to-market solutions, writes Mark Lewis, executive vice-president at EMC Software.
Much of the discussion around storage virtualisation centres on increasing the utilisation of storage assets.
However, storage resource management tools already allow users to monitor their entire storage infrastructure and manage its utilisation proactively. As a result, asset utilisation is a less pressing problem today.
Meanwhile, the data explosion continues apace. The new view of data sees its value fluctuating, mandating different locations and different types of containers throughout its life. This gives a whole new meaning to the concept of data availability.
For most organisations, this means constructing complex IT infrastructures that can deliver the right information to the right place at the right time, 24 hours a day, seven days a week.
Businesses require continuous information availability and can neither afford nor tolerate significant infrastructure downtime.
Downtime costs can run into the hundreds of thousands or even millions of dollars per hour. From running a physically-contained data centre, we are rapidly moving to running IT operations as if they were a permanently earth-orbiting space station with in-flight fuelling, maintenance and upgrades. IT managers have come to expect better tools for keeping their data continuously available.
Just 10 years ago, we asked: “What if we eliminated unplanned downtime?” Technologies such as business recovery (clustering) and data replication tools such as EMC’s SRDF have since succeeded in protecting data from outages caused by hardware failures, disasters and other unpredictable events.
With the right tools in place and with high levels of availability designed into new IT infrastructure projects, particularly for mission-critical applications, many data centres have succeeded in drastically minimising their unplanned downtime.
However, with the rapid increase in complexity of IT operations, the demand is also increasing for greater flexibility and responsiveness. IT has become critical in reducing delays in business processes and applications are now expected to be available 24×7 to more people than ever before, in more locations around the world.
So, today, the question becomes: “What if we eliminated planned downtime?” Maintenance, application upgrades, physical changes to the infrastructure, data centre migrations – they all reduce the time an application can be up and running. IT managers report that planned downtime accounts for between two-thirds and three-quarters of total downtime.
IT managers who have networked their storage in recent years are now looking to move production data non-disruptively across the storage infrastructure without ever taking applications down.
For instance, when introducing a new storage array into an existing SAN, a storage administrator would need to schedule this at a suitable time and ensure application owners are aware that data will be unavailable for a period of time. IT managers want to eliminate all storage-related downtime for any reason whatsoever.
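To make the idea concrete, here is a rough Python sketch of how a virtualisation layer might move a volume to a new array while the host keeps issuing I/O. The class names and block-level model are invented for illustration and do not describe any particular product.

```python
# Rough sketch only: a virtual volume that can migrate between arrays while
# the application keeps reading and writing. Class names and the block-level
# model are invented for illustration, not taken from any product.

class Array:
    """Stands in for a physical array addressed block by block."""
    def __init__(self, name, num_blocks):
        self.name = name
        self.blocks = [b"\x00" * 512 for _ in range(num_blocks)]

    def read(self, lba):
        return self.blocks[lba]

    def write(self, lba, data):
        self.blocks[lba] = data


class VirtualVolume:
    """One logical volume; the backing array can change underneath it."""
    def __init__(self, primary):
        self.primary = primary   # array currently serving I/O
        self.mirror = None       # migration target, when one is in flight

    def read(self, lba):
        return self.primary.read(lba)

    def write(self, lba, data):
        # During a migration, writes go to both arrays so the copy
        # never falls behind the running application.
        self.primary.write(lba, data)
        if self.mirror is not None:
            self.mirror.write(lba, data)

    def migrate_to(self, target):
        """Copy every block to the target, then switch the mapping."""
        self.mirror = target
        for lba in range(len(self.primary.blocks)):
            target.write(lba, self.primary.read(lba))
        # Cut over: from the application's point of view nothing changes.
        self.primary, self.mirror = target, None


# A new array joins the SAN and the old one is drained, with the host
# still addressing the same virtual volume throughout.
old, new = Array("legacy", 1024), Array("new-tier1", 1024)
vol = VirtualVolume(old)
vol.write(0, b"production data".ljust(512, b"\x00"))
vol.migrate_to(new)
assert vol.read(0).startswith(b"production data")
```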
They also want to centralise capacity allocation, provisioning and data movement capabilities in order to provide for more flexibility in multi-tiered, multi-vendor storage environments.
To execute on information lifecycle management (ILM), for instance: if a volume in a medium-performance storage pool is not meeting the service level guaranteed to the application owner, the storage administrator moves the data to a higher-performing pool, most likely on a different tier of storage, to meet the requirement.
In the future, we can envision this process being more automated based on policies set by the application owner.
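How such a move might be triggered can be illustrated with a small, purely hypothetical policy check in Python; the pool names, latency thresholds and function are assumptions made for the example, not a description of any shipping product.

```python
# Hypothetical sketch of policy-driven tiering: if a volume misses the
# service level its owner was promised, pick a faster pool. Pool names and
# latency figures are assumptions made up for the example.

TIERS = [  # ordered from lowest to highest performance
    {"name": "archive",  "max_latency_ms": 50.0},
    {"name": "midrange", "max_latency_ms": 20.0},
    {"name": "tier1",    "max_latency_ms": 5.0},
]

def choose_pool(current_pool, sla_latency_ms, observed_latency_ms):
    """Return a higher-performing pool that meets the SLA, or None if the
    current placement is fine (or nothing faster would help)."""
    if observed_latency_ms <= sla_latency_ms:
        return None  # the service level is already being met
    start = next(i for i, t in enumerate(TIERS) if t["name"] == current_pool)
    for tier in TIERS[start + 1:]:
        if tier["max_latency_ms"] <= sla_latency_ms:
            return tier["name"]
    return None  # no faster pool can satisfy the SLA; escalate instead

# A volume on the midrange pool is averaging 35 ms against a 20 ms SLA,
# so the policy recommends relocating it to the tier1 pool.
print(choose_pool("midrange", sla_latency_ms=20.0, observed_latency_ms=35.0))
```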
More often than not, storage administrators today are effectively barred from making changes within the storage infrastructure. The business’s intolerance of downtime, as well as the upstream impact on server and application administrators, stands in the way of everything from fine-tuning to much larger-scale moves and changes.
If properly deployed, storage virtualisation can give administrators the flexibility they need to make changes to the underlying infrastructure without impacting systems or applications.
These “non-disruptive operations” mask the complexity of the IT infrastructure from the business user. Delivering this capability into today’s storage infrastructures is the practical use case that will spur the broad acceptance and adoption of storage virtualisation technology.
But to achieve this new level of non-disruptive operations, users must choose a storage virtualisation solution that does not increase the complexity and cost of their storage infrastructure, while remaining easy to deploy and protecting their existing investments in storage functionality.
Finally, virtualisation can simplify the management paradigm by increasing flexibility within the infrastructure and enabling IT to manage it with less day-to-day involvement from application owners or lines of business.
Today, for example, each storage array and each server has its own methodology for configuring volumes; over time, storage virtualisation will give storage administrators a single, vendor-independent way to perform these functions from within the network.
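As a sketch of what that unified model could look like, the following Python fragment hides two invented, vendor-specific provisioning styles behind one call; the adapter classes and command strings are hypothetical and do not correspond to real array APIs.

```python
# Illustrative only: one provisioning request translated to each vendor's
# own configuration style. The adapters and their command formats are
# invented for this example and do not correspond to real array APIs.

from abc import ABC, abstractmethod

class ArrayAdapter(ABC):
    """Hides a vendor's particular way of carving out a volume."""
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str: ...

class VendorACli(ArrayAdapter):
    def create_volume(self, name, size_gb):
        return f"vendor-a CLI: create dev {name} size={size_gb}GB"

class VendorBRest(ArrayAdapter):
    def create_volume(self, name, size_gb):
        return f"vendor-b REST: POST /volumes name={name} size_gb={size_gb}"

ADAPTERS = {"array-a": VendorACli(), "array-b": VendorBRest()}

def provision(array_id, name, size_gb):
    """The administrator issues one request; the network layer translates it."""
    return ADAPTERS[array_id].create_volume(name, size_gb)

print(provision("array-a", "oradata01", 200))
print(provision("array-b", "oradata01", 200))
```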
Architectural considerations
Today’s in-band virtualisation solutions fall short of meeting these business needs. In-band virtualisation introduces an appliance or device “in the data path”.
There are inherent limitations to this approach in the areas of complexity, risk, investment protection, scalability and ease of deployment.
The in-band approach requires users to put all of their eggs in a very small basket. Instead of more flexibility, they become constrained by a difficult-to-implement, non-scalable solution.
It is also an all-or-nothing approach – all the data in the SAN has to be virtualised and pass through the virtual layer.
Historically, we have seen that any practical solution needs to adhere to certain principles: it must solve a specific problem (such as disruptions to IT operations); it must not create new problems (such as data integrity risks or increased complexity); and it must embrace the existing environment, retaining all the current value and functionality of the infrastructure.
Another approach to storage virtualisation, network-based virtualisation (or the out-of-band approach), is guided by these principles.
Network-based virtualisation leverages the existing SAN infrastructure by employing the next generation of intelligent switch or director technology.
The implementation in the switch uses an open, standards-based approach. Developing the technology in the open means that storage vendors must work closely with switch vendors to ensure a heterogeneous environment functions seamlessly.
This means developing standard interfaces and providing customers with a choice of switches from multiple suppliers.
A further element is a management appliance that sits “outside the data path” and focuses primarily on managing the overall virtual environment, including mapping the location of the data.
No “state” or version of the data is ever held in the network, and the application is not told that a write has completed until the data is safely stored on the array.
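A toy Python sketch of that split might look like the following: the appliance holds only the mapping, data moves directly between host and array, and success is reported only once the array confirms the write. All names here are hypothetical.

```python
# Minimal sketch of the out-of-band split: the appliance holds only the
# volume-to-array map, never the data, and a write is acknowledged only
# once the owning array reports it as stored. All names are hypothetical.

class MappingAppliance:
    """Out of the data path: knows where each virtual volume lives, nothing more."""
    def __init__(self):
        self._map = {}

    def set_location(self, virtual_volume, array):
        self._map[virtual_volume] = array

    def lookup(self, virtual_volume):
        return self._map[virtual_volume]


class BackEndArray:
    def __init__(self):
        self._store = {}

    def write(self, lba, data):
        self._store[lba] = data
        return True  # acknowledge only after the block is persisted


def host_write(appliance, virtual_volume, lba, data):
    """Data goes straight from host to array; no state is held in the network."""
    array = appliance.lookup(virtual_volume)   # control-path lookup only
    stored = array.write(lba, data)            # data-path transfer
    return stored                              # success is reported only once
                                               # the array confirms the write

appliance = MappingAppliance()
array = BackEndArray()
appliance.set_location("vol_finance_01", array)
assert host_write(appliance, "vol_finance_01", 42, b"ledger block")
```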
In terms of scalability, a networked storage virtualisation solution should be able to support multiple enterprise-class arrays and aggregate a range of intelligent high-end and mid-tier arrays, while providing complementary functionality such as non-disruptive migration.
While it guarantees scalability, it also provides IT managers with a granular, gradual approach, allowing them to virtualise only some of the data on the SAN.
It makes it possible to select, volume-by-volume, which volumes need to be virtual and which ones don’t.
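In practice, that selection could be as simple as a per-volume flag, as in this invented example (volume names and attributes are made up purely to show the contrast with an all-or-nothing approach).

```python
# Illustrative only: per-volume selection of what gets virtualised.
# Volume names and attributes are made up for the example.

volumes = [
    {"name": "erp_prod_01",  "virtualise": True},   # needs non-disruptive mobility
    {"name": "test_scratch", "virtualise": False},  # stays on its native path
    {"name": "mail_store",   "virtualise": True},
]

virtual_layer = [v["name"] for v in volumes if v["virtualise"]]
native_path = [v["name"] for v in volumes if not v["virtualise"]]

print("Presented through the virtual layer:", virtual_layer)
print("Left untouched on native paths:", native_path)
```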
With a system that scales, an organisation can start small with the assurance that, as it grows, the solution can scale up across the infrastructure.
In addition, this approach to storage virtualisation protects investments in value-added functionality already on the storage array. These solutions allow the user to continue using existing array-based replication technologies.
In summary, a distributed, open, network-based approach to storage virtualisation finally delivers the practical value that users have sought from this technology.
No longer a technology in search of a problem, network-based storage virtualisation promises to provide an unprecedented level of control over the infrastructure.
It will meet the growing need for completely non-disruptive operations, help to simplify and optimise the management of networked storage, and enhance the overall flexibility required to operate in today’s highly complex environments.