Consolidating workloads is a tried and tested approach to improving operational efficiency.

By Hayden Sadler, country manager for South Africa at Infinidat

A successful consolidation strategy can have both economic and administrative benefits. However, it can also increase the size of the fault domain: the collection of workloads that share a single point of failure.

The more workloads that are consolidated, the bigger the benefits, but also the larger the impact of a failure. To mitigate this risk, the underlying storage architecture needs to be designed intelligently, to ensure 100% guaranteed data availability along with other benefits.
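
To see the trade-off in simple terms, consider a rough back-of-the-envelope calculation. The Python sketch below uses purely hypothetical figures for workload counts and per-array overhead; it is not drawn from any vendor or analyst data, but it shows how consolidating onto fewer arrays cuts overhead while widening the blast radius of a single failure.

```python
# Illustrative numbers only: hypothetical workload counts and overhead units,
# chosen to show the trade-off, not taken from any study.
workloads = 120            # total workloads to be hosted
arrays_before = 12         # legacy arrays before consolidation
arrays_after = 2           # consolidated arrays after

per_array_overhead = 1.0   # admin/energy/footprint cost per array (arbitrary unit)

overhead_saved = (arrays_before - arrays_after) * per_array_overhead
blast_radius_before = workloads / arrays_before   # workloads hit if one array fails
blast_radius_after = workloads / arrays_after

print(f"Overhead saved: {overhead_saved:.0f} units")
print(f"Workloads affected by a single array failure: "
      f"{blast_radius_before:.0f} before vs {blast_radius_after:.0f} after")
```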

The ever-present efficiency challenge

The IT industry is constantly grappling with the challenge of making infrastructure as efficient and cost-effective as possible, while also ensuring that performance, availability and agility needs are met. Workload consolidation is one strategy that many enterprises have adopted to achieve this, and according to IDC, it potentially offers a host of benefits.

These include more efficient data sharing, centralised management leading to higher productivity, and a simplified environment with fewer storage vendors to manage. In addition, as economies of scale kick in through denser infrastructure, costs are reduced, both directly and indirectly through lower energy consumption and reduced storage footprint.

However, legacy architecture frequently does not support a consolidated workload strategy, which can negatively impact performance. In such an environment, if one workload's demands spike, other workloads could be affected. Furthermore, maintenance and upgrades can cause downtime, which means that service requirements cannot be met. Finally, the larger and more densely consolidated a system, the larger the impact of a failure and the longer it takes to recover. A catastrophic failure could potentially take down an entire system or data centre.

An intelligent architecture is the answer

Traditional storage typically deploys an N+1 architecture: the live component plus one redundant system for failover. This is problematic for consolidated workloads for several reasons. Such an architecture leaves organisations exposed whenever part of their infrastructure is offline, for example during an upgrade. If, during that window, a component failure or other unforeseen event occurs on the remaining system, access to data is lost. The cost implications of such a failure are massive, not to mention the lost revenue and reputational damage that result from downtime.

Organisations need to deploy an N+2 architecture, which features triple redundancy to dramatically reduce the risk of outright failure. In a triple-redundancy environment, there is always a redundant backup, even if one system is offline. However, simply implementing an additional redundant system is not sufficient. Recovery time is critical, so enterprises need automatic failover and failback. Active-active clusters are also critical, so that redundant systems are always online, allowing for seamless transition between systems without disrupting data access.
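
To see why the extra layer of redundancy matters, a simplified availability model helps. The sketch below assumes independent component failures and a hypothetical 99.9% availability per component; real systems also depend on failover speed, shared dependencies and human error, so treat the figures as indicative only.

```python
# Simplified model: components fail independently, each with a hypothetical
# 99.9% availability. Real-world outcomes also depend on failover speed,
# shared dependencies and operational error.
component_availability = 0.999
unavailability = 1 - component_availability

def system_availability(spares: int) -> float:
    """The system is down only if the active component and all spares fail."""
    return 1 - unavailability ** (1 + spares)

print(f"N+1: {system_availability(1):.6f}")   # two copies must fail together
print(f"N+2: {system_availability(2):.9f}")   # three copies must fail together

# During an upgrade one component is deliberately offline, so effective
# redundancy drops by one: N+1 is left with no spare, N+2 still has one.
print(f"N+1 during upgrade: {system_availability(0):.6f}")
print(f"N+2 during upgrade: {system_availability(1):.6f}")
```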

Effectively maintaining a dense workload consolidation environment

While consolidated workloads typically have lower failure rates, they have larger fault domains. The more workloads brought together, the greater the impact when a failure occurs. The final consideration is to leverage a system that guarantees 100% data availability and uptime. Importantly, it should also enable non-disruptive upgrades.

A software-defined approach to storage design can help accommodate evolving demands such as consolidated workloads, while maintaining cost efficiency through elastic pricing, a flexible financial model that combines CapEx and OpEx, and industry-standard hardware. This is where intelligence comes in to deliver high performance on commodity hardware. Solutions need to offer consistent performance against varying Input/Output (I/O) profiles, with multitenant management to ensure consistent quality of service for all workloads, regardless of their demands. This is critical to maximising performance across the environment.
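
One way to picture multitenant quality of service is as per-workload I/O limits that stop a noisy workload from starving its neighbours. The sketch below is a minimal illustration of that idea; the class and field names are hypothetical and do not represent any particular vendor's implementation.

```python
# Minimal sketch of per-workload QoS limits on a consolidated array.
# Names and figures are hypothetical, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class WorkloadLimit:
    name: str
    max_iops: int   # ceiling: a spike in this workload cannot exceed it
    min_iops: int   # floor the scheduler reserves for it under contention

def clamp_iops(requested_iops: int, limit: WorkloadLimit) -> int:
    """Cap a workload's I/O demand at its ceiling."""
    return min(requested_iops, limit.max_iops)

oltp = WorkloadLimit("oltp-db", max_iops=50_000, min_iops=10_000)
backup = WorkloadLimit("nightly-backup", max_iops=5_000, min_iops=500)

print(clamp_iops(80_000, oltp))    # spike clamped to 50000
print(clamp_iops(12_000, backup))  # clamped to 5000, so the OLTP workload is unaffected
```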

Out with the old

To leverage storage workload consolidation effectively, enterprises need a vendor with a proven track record of delivering intelligent, cost-effective storage at multi-petabyte scale. They also need to ensure they implement a solution that enables elastic pricing, where costs are not directly linked to the storage media.

Consolidated workloads bring together multiple legacy storage arrays on a single platform. This has many benefits, as discussed above. However, there are specific considerations that organisations need to bear in mind, as the capabilities of the storage architecture are of the utmost importance. These include performance, availability, functionality, and affordability requirements, to cost-effectively consolidate different types of workloads with different I/O profiles onto a single system.