Seventeen hopefuls fight for the NVMe Fabric array crown

Survey shows a storage tidal wave coming our way

Comment A new phase of disruption is hitting the performance data storage array market, giving new, old, startup and struggling all-flash array vendors a shot at making it big by using NVMe flash drives and NVMe Fabric-class connectivity to provide direct-attached SSD performance from external arrays.

The charge is being led by Dell-EMC’s DSSD unit and startups such as Mangstor and Apeiron, which are shipping product that offers shared array data access at speeds hitherto unattainable except from server-attached PCIe flash drives.

A G2M NVM Express (NVMe) Ecosystem Market Sizing Report shows the state of the market now and just how fast it has been developing. Soon, all external arrays with ambitions of storing tier-1, fast-access performance data will likely have to use NVMe flash drives (or faster media; think XPoint-class) and NVMe over Fabrics (or accelerated Ethernet) network access, or else face being replaced by arrays that do.

Our feeling is that we are looking at a mass transition in the 2017-2018 period, with installed base migration from 2017 to 2020 and beyond. This could provide a great fillip for on-premises array vendors, as performance data storage in the cloud cannot provide equivalent performance unless the apps run in the cloud and similar storage hardware/software is used.

[G2M NVMe ecosystem chart]

Shaun Walsh, president and managing partner at G2M Inc., says: “We expect the NVMe market to be more than $57bn by 2020 with a 95 percent compounded annual growth rate.”

Yes, a 95 per cent CAGR.
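
For a sense of what that 95 per cent figure implies, here is a quick back-of-the-envelope check in Python; the base year and the implied 2016 figure are our own inferences from the headline numbers, not figures taken from the G2M report.

    # Rough sanity check of the headline forecast (a sketch; the 2016 base year
    # is our assumption, not a figure from the G2M report)
    cagr = 0.95          # 95 per cent compound annual growth rate
    market_2020 = 57e9   # "more than $57bn" forecast for 2020
    years = 4            # assuming 2016 as the base year

    implied_2016_base = market_2020 / (1 + cagr) ** years
    print(f"Implied 2016 NVMe market: ${implied_2016_base / 1e9:.1f}bn")
    # Roughly $3.9bn, meaning the forecast has the market nearly doubling every year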

The G2M report predicts:

  • The NVMe market will be more than $57 billion by 2020
  • More than 50 per cent of enterprise servers will have NVMe bays by 2020
  • 60 per cent of enterprise storage appliances will have NVMe bays by 2020
  • Nearly 40 per cent of AFAs will be NVMe-based by 2020
  • Worldwide shipments of NVMe SSDs will grow to more than 25 million units by 2020
  • Worldwide shipments of M.2 NVMe SSDs will grow to more than five million units by 2020
  • NVMe-oF adapter shipments will climb to 740,000 units by 2020
  • Ethernet adapters with RDMA capabilities will be more than 75 per cent of the NVMe-oF market

The G2M report suggests NVMe “will become a major component of new models for compute, fabrics, analytics, application acceleration, systems management and more. There are more than 70 companies that have announced products for the NVMe market and G2M Research has grouped these product offerings into nine categories, including servers and storage appliances with NVMe bays, NVMe all-flash arrays (AFAs), NVMe-oF adapters, three classes of NVMe storage devices, NVMe I/O blocks and NVMe accelerator adapters.”

It has identified more than 70 supplier companies involved in the NVMe ecosystem, suggesting major players include: Broadcom; Cavium; Dell Technologies; IBM; Intel; Lenovo; Marvell; Mellanox; Micron; Microsemi; Oracle; Samsung; SK Hynix; and Toshiba; plus emerging startups such as Envemio, Kalray, Kazan Networks, Mangstor, NVLX, and Teledyne-LeCroy.

Get a copy of the report abstract here (PDF).

The contenders

El Reg sees some 17 suppliers already in, or set on entering, the NVMe over Fabrics (NVMeF) storage array market. They can be grouped into five classes:

  • Established incumbents: Dell (EMC); HDS; HPE; IBM; and NetApp
  • Arriving incumbent: Pure Storage
  • NVMeF-style startups: Apeiron; E8; Excelero; Mangstor; and Pavilion Data Systems
  • All-flash-modified hybrid array vendors: Nimble; Tegile; and Tintri
  • All-flash array startup era survivors: Kaminario and Violin Memory

Oracle should also be classed as an established incumbent and, we suggest, will not be left behind in performance data access by NVMe arrays. It may focus on NVMe fabric access over InfiniBand.

We should also note Chinese system/storage vendors in the wings, meaning Huawei and Lenovo. Then there are other startup, niche and minor array vendors to consider, such as DataDirect Networks, Fujitsu, Infinidat, Nexsan, Panasas, and Seagate with ClusterStor.

NVMe over Fabrics technology gives all these vendors a shot at building market share and taking sales from non-NVMeF incumbents. It gives a fresh lease of life to ailing and struggling players, levelling the playing field to some extent.

It also converges the performance external array space: different styles of array (disk, all-flash, and hybrid) evolve towards a consistent all-flash, NVMe-drive, NVMeF-class-access future.

It is quite possible that HPC storage access will start using NVMe drives and fabrics as well, which is why DDN, Panasas and Seagate are in the list above.

NVMe and HCI

Tegile CMO Narayan Venkat thinks that NVMe fabrics will be the great equaliser between hyper-converged infrastructure (HCI) systems, such as those from Nutanix and SimpliVity, and external arrays. With storage admin being subsumed into system admin via technologies such as VVOLs, HCI and shared array systems effectively reach equivalent storage management simplicity.

He says that a problem with HCI systems is that you can’t separately scale capacity and performance, which you can do with shared arrays. You can do this at the system level by scaling compute (servers) independently of capacity (storage arrays), and also within the storage array by separately scaling the controllers, the performance storage tier and, separately again, any other capacity tiers.

We’d point out that multi-node HCI systems spend more resources on inter-node messaging to maintain a single version of the virtual array’s stored truth as more nodes are added. With a shared array, this need to co-ordinate servers, each holding its own portion of the virtual array, disappears.

A further point: with DAS PCIe flash access speeds available from a shared array, the data locality argument, sometimes touted by HCI vendors who say compute and the storage for the apps running on that compute need to be located close together, goes away. All the data on a shared NVMe fabrics array is effectively local to every server connected to it over an NVMeF-class link.
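
To show why the shared namespace looks local, here is a minimal sketch of a host attaching an NVMe-oF namespace with the standard Linux nvme-cli tool, driven from Python for illustration; the target address and subsystem NQN below are hypothetical placeholders, not real kit.

    # A minimal sketch: attach an NVMe-oF namespace over RDMA so it appears to
    # the host as an ordinary local block device. Assumes Linux with nvme-cli
    # installed and an RDMA-capable NIC; the address and NQN below are made up.
    import subprocess

    TARGET_ADDR = "192.168.10.20"                    # hypothetical array portal
    TARGET_NQN = "nqn.2016-06.com.example:array01"   # hypothetical subsystem NQN

    # Discover the subsystems the array exports (4420 is the standard NVMe-oF port)
    subprocess.run(["nvme", "discover", "-t", "rdma",
                    "-a", TARGET_ADDR, "-s", "4420"], check=True)

    # Connect; the remote namespace then shows up as, say, /dev/nvme1n1 and is
    # read and written like a direct-attached NVMe drive
    subprocess.run(["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN,
                    "-a", TARGET_ADDR, "-s", "4420"], check=True)

    # List the NVMe devices now visible to the host, local and fabric-attached alike
    subprocess.run(["nvme", "list"], check=True)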

Consider composable infrastructure too, with racks full of compute, networking and storage shelves all dynamically composed into systems as needs dictate (think HPE Synergy). At a gross level, a rack of servers, network switches and storage shelves will then be common to traditional systems, converged systems and HCI systems. All that will differ is the software organisation and the buying process.

Finishing view

NVMe over Fabrics access to NVMe drive arrays seems almost certain to be the external array future for storing performance data. As applications, middleware and operating systems are developed to take advantage of it, today’s IO-bound apps will lose that constraint and become balanced between IO and CPU instead, while servers operate at hitherto unachievable levels of efficiency and applications run faster and do more.

The public cloud will not be able to compete unless both apps and data are cloud-resident and run in hardware and software environments offering similar performance levels. NVMe drives and fabrics offer a great opportunity for on-premises IT system and storage vendors, and for users, to turbo-charge existing systems and make them go a hell of a lot faster. We’ll drink to that. ®
