SolidFire's thin flash density eats more racks than HPE. What gives?

Flagging support for high-cap SSDs could embiggen your data centres

Analysis NetApp's SolidFire arrays need four times more rack space than HPE and EMC kit for equivalent amounts of flash storage. Its hardware comes in 1U boxes, and these appear to be limited in the capacity of SSDs they can support.

Because SolidFire arrays are built from scale-out nodes, low flash density per rack unit means you need more nodes to reach a particular level of flash capacity.

SolidFire SF19210 array nodes, or appliances, each have ten 1.92TB 3D NAND SSDs, arranged in two rows of five across the front of the enclosure. This gives a node 19.2TB of raw capacity. You would need 20 of them, occupying 20 rack units, to reach 384TB. If you could increase the flash density fourfold, you would need only five nodes taking up 5U.
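The node-count arithmetic above can be sketched in a few lines (a quick illustrative calculation, not SolidFire tooling):

```python
import math

def nodes_needed(target_tb, ssd_tb, ssds_per_node=10):
    """Number of SolidFire-style 1U nodes needed to reach target_tb raw capacity."""
    node_tb = ssd_tb * ssds_per_node
    return math.ceil(target_tb / node_tb)

# SF19210: ten 1.92TB SSDs per 1U node
print(nodes_needed(384, 1.92))   # 20 nodes, so 20U
# Hypothetical node with 7.68TB SSDs (fourfold density)
print(nodes_needed(384, 7.68))   # 5 nodes, so 5U
```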

Physically, you could fit ten of Samsung's 3D 7.68TB or 15.36TB SSDs into an SF19210 enclosure. This would create, using SolidFire's naming convention, SF76810 and SF153610 nodes with 76.8TB and 153.6TB raw capacity respectively.

Such a hypothetical SF153610 would reach 460.8TB with three nodes, a great improvement on the twenty SF19210 nodes needed currently.

EMC's upgraded VNX, the Unity product, now supports up to 25 x 3.2TB Samsung 3D NAND drives in its 2U enclosures, meaning 80TB in 2U, a 40TB/U density compared to SolidFire's 19.2TB/U.

EMC says Unity will support 7TB SSDs in August and 15TB ones by the end of 2016. This will give it 175TB capacity in August, 87.5TB/U, more than four times better than SolidFire's meagre-looking 19.2TB/U.

HPE is adding support in its 3PAR StoreServ line for 7.68TB and 15.36TB SSDs. It started using 3.84TB SSDs in the gen 5 StoreServ 8000 in August 2015. The StoreServ 8000 has 24 SSDs in its basic 2U enclosure, meaning 92.2TB, or 46TB/U. With support for 7.68TB and 15.36TB SSDs, that rises to 184.3TB and 368.6TB respectively, 92TB/U and 184TB/U. These are almost 5x and 10x SolidFire's TB/U density.
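The TB/U comparisons above can be reproduced with a short calculation (drive counts and capacities are the figures quoted in this article; the function name is ours):

```python
def density_tb_per_u(ssd_tb, drive_count, rack_units):
    """Raw flash density in TB per rack unit of enclosure."""
    return ssd_tb * drive_count / rack_units

solidfire = density_tb_per_u(1.92, 10, 1)    # SF19210: 19.2 TB/U
unity     = density_tb_per_u(3.2, 25, 2)     # EMC Unity: 40.0 TB/U
storeserv = density_tb_per_u(15.36, 24, 2)   # StoreServ 8000: 184.3 TB/U

print(round(storeserv / solidfire, 1))  # ~9.6x SolidFire's density
```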

NetApp, SolidFire's owner, supports 15TB SSDs in its latest ONTAP 9 software release.

Why isn't SolidFire supporting high-cap SSDs? Why is it not offering better NAND density per U of rack space? Our understanding is that it is a software architecture issue in the Element OS, and an update is being prepared.

John Rollason, SolidFire's marketing director, said: "I spoke to Dave Wright on the subject of 15TB SSDs specifically recently." His views are summarised below:

  1. They aren't the most cost-effective in $/GB of raw capacity because there is only one vendor and it is a "premium" drive. They are, however, the densest.
  2. SolidFire's starting footprint would be “pretty ridiculous” given that they have a minimum of 40 drives in their current footprint. That's 600TB RAW and over 2.2PB effective – just to start.
  3. Their starting cost would be “equally crazy”. Rollason estimated it as “like $1.5m for four nodes.”

"Over time,” concluded Rollason, “the combination of decreased cost, increased capacity demand, new form-factors that aren't 10 drives per node, and Element OS work will converge to make bigger drives sizes useful and practical, but they just aren't a great solution for us right now."

Comment

Far be it from a basic hack to contradict the boss of SolidFire, but I'd suggest that SolidFire's arrays needn't just support 15.36TB drives and get self-clobbered with a minimum four node configuration costing $1.5m. There are, after all, 3.84TB and 7.68TB SSDs available; HPE started supporting the 3.84TB ones 11 months ago.

There seems to be no reason why, from a product marketing point of view, SolidFire couldn't have starting systems with 960GB drives, or smaller, and then a ladder of bigger supported drives: 1.92TB, 3.84TB, 7.68TB and 15.36TB, for those customers who need the extra capacity and are willing to pay for it.

You can mix and match SSD capacity nodes with SolidFire. The company says: "Start with a four node cluster of any SolidFire nodes, and mix and match nodes as you scale to take advantage of the most current, cost-effective flash technology available."

Wright mentions four things that bear on high-cap SSD support:

  1. Decreased cost
  2. Increased capacity demand
  3. New form-factors that aren't 10 drives per node
  4. Element OS work

EMC, HPE and SolidFire parent NetApp all support larger SSDs than SolidFire, so we would suggest they don't find the bullet-pointed items above an obstacle. And, as we point out above, you can physically slot high-cap 2.5-inch SSDs into standard 2.5-inch SSD bays. Moving to a 2U or 3U node form factor would increase capacity headroom inside the box, but doesn't seem necessary for high-cap SSD support on its own.

It looks to us, here at Vulture Central, that the Element OS work could be crucial for high-cap SSD support.

Will it be a point release on the Fluorine base or will a future Neon major release (Element 10) be needed? We'll go for a point release and late 2016, or early 2017. ®
