There are more storage options today than ever before – and picking the right solution can be a challenge for IT administrators.

Choices are between all-flash and hybrid options, and solutions that are software-defined or hyperconverged, says Megha Shukla, product marketing manager at Fujitsu.

“It becomes very important to understand what each solution can do for you, to help decide which you should invest in,” she says.

To do this, CIOs should look at their scalability requirements, Shukla advises.

“If storage is a standalone block it can scale up on its own. If this is what you need you would want to look at a classic three-tier architecture.”

Hybrid arrays and all-flash arrays are suitable for classic three-tier architectures.

Applications and requirements also help to determine which solution administrators should opt for.

If the system is based on server virtualisation and virtual desktop infrastructure (VDI), a three-tier architecture would make sense.

If cost of capacity is a concern, and the workload involves unstructured data, content depots and online archives, historical data, analytics or media streaming, an HCI or software-defined architecture would make sense.

If performance is the priority, all-flash would make sense. It gives the fastest response times and the lowest operational costs in terms of power, space, maintenance and administration.

Software-defined hyperscale storage is most effective where extreme scalability, the low cost of open source, and access to object and OpenStack storage are required.

Hybrid disk and flash storage offers all-in-one performance that balances capacity, speed and costs in one system.

All-flash storage arrays have enjoyed a lot of hype: they are becoming mainstream now, and offer gains in performance.

A single SSD has the I/O performance of 100 15k disks, and that performance is consistent.

Response times are also up to 10 times faster, thanks to lower latency.

In addition, SSDs are six times more reliable than hard disks because they don’t have moving parts.

Savings are also significant: all-flash uses 95% less power than a disk system; 40% less operational effort is required; the same raw capacity needs five times less space, an 80% saving; and up to 30% fewer servers are required for the same amount of storage.
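The percentage savings above can be turned into a quick back-of-the-envelope comparison. The baseline figures below (power draw, rack space, server count) are hypothetical round numbers chosen purely for illustration; only the savings ratios come from the article.

```python
# Illustrative all-flash vs disk-system footprint comparison.
# Baseline disk-system figures are hypothetical; the savings
# ratios (95% power, 5x space, 30% fewer servers) are the
# article's claims.

disk = {"power_kw": 10.0, "rack_units": 50, "servers": 100}

flash = {
    "power_kw": round(disk["power_kw"] * (1 - 0.95), 2),  # 95% power saving
    "rack_units": round(disk["rack_units"] / 5, 2),       # five times less space
    "servers": round(disk["servers"] * (1 - 0.30), 0),    # up to 30% fewer servers
}

print(flash)  # {'power_kw': 0.5, 'rack_units': 10.0, 'servers': 70.0}
```

Note that "five times less space" corresponds to an 80% reduction, which is how the space line is computed here.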

Gartner tells us that, by 2021, 50% of data centres will use solid state arrays for latency-sensitive workloads, up from 30% today.

“Given the basic mapping of an all-flash storage array, if a company’s current architecture is based on fast-performing drives, all-flash would make more sense,” says Shukla.

There may be a slightly higher initial cost, but price per capacity will reach break-even within the product lifecycle. In addition, data reduction can close the price gap in the short term, and there can be significant savings in operations and maintenance.
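The break-even argument can be sketched as a simple total-cost-of-ownership comparison. All figures below are hypothetical assumptions for illustration, not Fujitsu pricing: the flash array costs more upfront but, thanks to data reduction and lower running costs, comes out ahead over the lifecycle.

```python
# Hedged TCO sketch of the break-even argument. All numbers are
# hypothetical assumptions, not vendor figures.

def effective_price_per_tb(price_per_raw_tb, reduction_ratio):
    """Price per usable TB after data reduction (e.g. 3:1 -> ratio 3)."""
    return price_per_raw_tb / reduction_ratio

def tco(capex, annual_opex, years):
    """Simple total cost of ownership over the product lifecycle."""
    return capex + annual_opex * years

# Flash: higher capex, lower opex (power, space, administration).
disk_tco = tco(capex=100_000, annual_opex=30_000, years=5)   # 250_000
flash_tco = tco(capex=150_000, annual_opex=15_000, years=5)  # 225_000

print(flash_tco < disk_tco)  # True: flash breaks even within the lifecycle

# Data reduction narrows the capex gap too: at 3:1 reduction,
# a raw price of 600/TB becomes 200 per usable TB.
print(effective_price_per_tb(600, 3))  # 200.0
```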

If CIOs are looking for a more balanced approach, with performance as well as scalability and management benefits in the form of different tools, and a need to consolidate storage, then hybrid systems may make more sense.

Fujitsu offers a full range of storage solutions, both hybrid and all-flash.

The new Eternus DX8900 S4 is a fully flash-enabled, NVMe-accelerated, flex-scale data centre storage system with true enterprise-class data reduction technology. It caters for storage consolidation, business data continuity, a transition to full flash and flexible data services.

“Something very prominent when we talk about data services is data reduction,” says Shukla. “We do have compression available on this product – and it is hardware-based, so there is no impact on CPU cycles.”

The product can scale up to 140 petabytes of flash capacity, with up to 24 controllers, up to 300TB of NVMe cache and upwards of 6 000 drives, with hardware-accelerated compression and automated quality of service management.

All-flash is becoming mainstream, but the next technology is already approaching, says Frank Reichart, senior director: product marketing at Fujitsu.

NVMe is quickly becoming the next big thing in storage.

Today’s flash arrays use the SCSI protocol via SAS connectivity, while NVMe uses PCIe connectivity. It is very efficient, using 13 commands versus 400 on SCSI, and it can handle up to 64 000 data streams compared to the 32 that SCSI can handle.
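The protocol figures quoted above can be arranged side by side to show why the parallelism difference matters. This is just the article's own numbers restated; the point is the scale of the gap in concurrent data streams.

```python
# The NVMe vs SCSI figures quoted in the article, arranged for
# comparison: a much leaner command set and vastly deeper parallelism.

protocols = {
    "SCSI/SAS": {"commands": 400, "parallel_streams": 32},
    "NVMe":     {"commands": 13,  "parallel_streams": 64_000},
}

stream_ratio = (
    protocols["NVMe"]["parallel_streams"]
    // protocols["SCSI/SAS"]["parallel_streams"]
)

print(stream_ratio)  # 2000 -> NVMe services 2 000x more concurrent streams
```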

NVMe is still in its infancy, with just 1% market share today. However, by the end of next year it will be installed in more data centres, and is expected to hit 20% market share by 2021.

NVMe will take off because it can run parallel data streams, Reichart explains.

New applications coming to market through digitalisation will require parallelism. Applications based on artificial intelligence (AI), the Internet of Things (IoT), big data and machine learning will also add to the load on storage systems.

“In the mid-term you will need a lot of data streams in parallel,” Reichart explains.

However, he cautions that NVMe should be employed in the correct environment.

To truly benefit, it is recommended to deploy a 32Gbps SAN infrastructure that is NVMe-oF ready. Switches, HBAs and OS drivers should be updated for end-to-end performance, and applications should be checked to confirm they allow parallel reading and writing of data.

“Of course at Fujitsu we will embrace NVMe fully,” Reichart says.

Software-defined storage (SDS) decouples data management from the hardware, using a networked architecture based on x86 servers.

SDS has a number of benefits: it is easy to scale; it offers an extended lifecycle with fewer migrations; and there is the potential for lower costs.

Challenges are that companies need the skills to build their own storage; there could be hidden costs and total cost of ownership (TCO) risks; and SDS could have higher latency.

Use cases include object or S3 storage and distributed file storage, but the biggest growth for SDS is as part of a hyperconverged infrastructure (HCI).

The current hype around hyperconvergence is complicating the market, says Reichart.

With HCI, compute and storage scale together. It is growing quickly, with a 48% CAGR forecast between 2016 and 2021. In fact, by 2020, 20% of business-critical applications will have transitioned to HCI, up from 5% today.
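A 48% compound annual growth rate compounds dramatically over the forecast window. A quick calculation shows the implied market multiple over the 2016 to 2021 period mentioned above:

```python
# What a 48% CAGR implies over the five-year 2016-2021 window:
# the cumulative growth multiple for the HCI market.

cagr = 0.48
years = 5  # 2016 -> 2021

multiple = (1 + cagr) ** years
print(round(multiple, 1))  # ~7.1x growth over the period
```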

The main usage scenarios for HCI are large scale server virtualisation, desktop virtualisation, private cloud and branch office applications.

The advantages are that there is one management layer, with the server performing both compute and storage; and it is highly modular and scalable.

The challenges are that it is less of a fit for heterogeneous storage workloads; it is more difficult to maintain quality of service; and there is the possibility of creating an additional storage silo.