Dell EMC has developed a strategy to support a wider range of organisations going through the modernisation shift. This strategy organises the converged approach into three basic system types: blocks, racks, and appliances.
Dell EMC pioneered the converged infrastructure market in 2009 with the Vblock System. Today, we are the undisputed market leader of these block architectures built on discrete server, network, and storage systems.
In 2015, the company announced rack-scale systems built on industry-standard servers and software-defined storage and compute technologies. Converged rack systems support enterprise- and service-provider-scale requirements for general-purpose workloads across both traditional and next-generation cloud-native applications.
Soon after, we announced hyper-converged appliances that deliver a complete virtual infrastructure experience in a compact 2U form factor. These units scale to dozens of nodes and provide the modular approach to scaling that small to midsized enterprises, as well as remote and branch office teams, need most.
Experience demonstrates that the Dell EMC converged block, rack, and appliance family delivers tremendous value to the businesses that run them. Companies that adopt converged systems deliver applications about five times faster, provide new services four times faster, reduce downtime by about 96%, and reduce time spent on basic “keep the lights on” tasks by more than 40%.
This is not to say that converged infrastructure, on its own, will yield such savings and results. It is the job of IT to align infrastructure investments with business requirements.
Dell EMC believes that four foundational pillars, or attributes, must be in place to deliver business results via the modern data centre: flash-based; scale-out; software-defined; and cloud-enabled.
Over the years, as demand for low-latency data performance increased, storage administrators solved the latency problem by adding spindles to their storage solutions – simply to deliver sufficient IOPS at acceptable response times for ever-growing high-performance workloads.
By reducing the number of drives required to deliver a given level of IOPS, flash technology dramatically reduces the cost of consistent, predictable low-latency performance. It also significantly reduces the floor space, power consumption, and cooling needed to deliver day-to-day storage services, while ensuring the fast digital experience that users now expect.
Additionally, as flash prices have dropped and capacities have increased, the economics have approached an inflection point as flash drives have begun to drive capacity efficiencies as well.
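The spindle-count argument can be made concrete with some back-of-the-envelope arithmetic. The per-drive figures below are illustrative assumptions (roughly a 15K RPM SAS spindle versus an enterprise flash drive), not vendor specifications:

```python
import math

# Illustrative per-drive performance figures (assumptions, not vendor specs):
HDD_IOPS = 180      # ~15K RPM SAS spindle
SSD_IOPS = 50_000   # enterprise flash drive

def drives_needed(target_iops: int, per_drive_iops: int) -> int:
    """Minimum number of drives needed to satisfy a target IOPS figure."""
    return math.ceil(target_iops / per_drive_iops)

target = 100_000  # a hypothetical high-performance workload
print(drives_needed(target, HDD_IOPS))  # 556 spindles
print(drives_needed(target, SSD_IOPS))  # 2 flash drives
```

Even with these rough numbers, the gap of two drives versus several hundred explains the floor-space, power, and cooling savings described above.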
As business data and capacity requirements expand, scale-out architectural design means IT can deploy systems with a low cost of entry, while providing a modular approach to scaling out the infrastructure as requirements grow.
By designing these systems as a single managed, scale-out system, IT can efficiently administer massive capacities with minimal resources.
This is critical when dealing with ever-expanding workloads and the need to iterate and expand services in a rapid time frame.
Thanks to software, infrastructure systems are now managed as a single, elastic resource pool. This allows IT to manage massive capacities of infrastructure in a manner that modern architects refer to as “infrastructure as code”.
This new software-defined model automates the configuration and deployment of IT services, delivering greater business agility and a more flexible, programmable approach to managing data services.
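The essence of this software-defined, "infrastructure as code" model is that desired state is declared as data and a reconciliation step computes whatever actions are needed to reach it. The sketch below is a minimal illustration of that pattern; the service names and attributes are hypothetical, not any real Dell EMC API:

```python
# Desired state is declared as data; a plan step reconciles it against
# what is currently deployed. All names and fields here are hypothetical.

desired = {
    "web": {"nodes": 4, "storage_gb": 500},
    "db":  {"nodes": 2, "storage_gb": 2000},
}

current = {
    "web": {"nodes": 2, "storage_gb": 500},
}

def plan(desired, current):
    """Return the provisioning actions that bring current state to desired."""
    actions = []
    for svc, spec in desired.items():
        have = current.get(svc)
        if have is None:
            actions.append(("create", svc, spec))   # service does not exist yet
        elif have != spec:
            actions.append(("update", svc, spec))   # service drifted from spec
    for svc in current:
        if svc not in desired:
            actions.append(("destroy", svc, None))  # no longer declared
    return actions

for action in plan(desired, current):
    print(action)
```

Because configuration lives in version-controllable data rather than manual steps, the same declaration can be replayed to rebuild or audit the environment, which is what gives the model its agility.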
This is essential in optimising standard IT operations in order to focus greater resources on innovation.
Cloud-enabled infrastructures are fundamental to modern data centre design.
In today’s optimise-to-innovate environment, infrastructure services must be delivered via policy-driven methodologies.
To achieve true agility, speed, and efficiency, these policies must extend beyond the data centre. IT must be able to deploy and manage information and applications both on- and off-premises, with the flexibility to move workloads back and forth as the business requires.
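A policy-driven placement decision of the kind described above can be sketched as a simple rule set. The policy fields (`data_sensitivity`, `burst`) are illustrative assumptions, not a real product interface:

```python
# Hedged sketch of policy-driven workload placement: a rule set decides
# whether a workload runs on-premises or in a public cloud.
# The policy fields used here are illustrative, not a real API.

def place(workload: dict) -> str:
    """Apply a placement policy; returns 'on-prem' or 'cloud'."""
    if workload.get("data_sensitivity") == "high":
        return "on-prem"   # regulated data stays in the data centre
    if workload.get("burst", False):
        return "cloud"     # elastic demand bursts to public cloud capacity
    return "on-prem"       # default placement

print(place({"name": "payroll", "data_sensitivity": "high"}))  # on-prem
print(place({"name": "analytics", "burst": True}))             # cloud
```

Encoding placement as policy rather than manual decisions is what lets workloads move between on- and off-premises environments without re-engineering each application.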