
Hey blockheads, is an NVMe over fabrics array a SAN?

No, says Datrium, 'cos you can't share data. E8 sort of agrees

Analysis What is a SAN and is an NVMe over Fabrics (NVMeF) array a SAN or not? Datrium says not. E8 says... maybe.

A Storage Area Network (SAN) is, according to Wikipedia, "a network which provides access to consolidated, block level data storage... A SAN does not provide file abstraction, only block-level operations. However, file systems built on top of SANs do provide file-level access, and are known as shared-disk file systems."

Techopedia says: "A storage area network (SAN) is a secure high-speed data transfer network that provides access to consolidated block-level storage. An SAN makes a network of storage devices accessible to multiple servers."

It seems simple enough, but there is a wrinkle that affects NVMeF arrays.

Techterms states: "A SAN is a network of storage devices that can be accessed by multiple computers. Each computer on the network can access hard drives in the SAN as if they were local disks connected directly to the computer."

This allows individual hard drives to be used by multiple computers, making it easy to share information between different machines.

That, for Datrium, is the kicker. Co-founder and CTO Hugo Patterson told a Silicon Valley press tour group: "NVMe in a shared chassis look like an internal drive – so it's not shared data. It's not a SAN. You cannot run VMFS (VMware file system) on top of that."

Datrium CTO Hugo Patterson

He's saying that SAN users can share data and drives as well as storage chassis. But NVMeF array users cannot share data or drives. Why not?

The idea starts from having a local drive in a computer, client device or server. It is local to its host and private, inaccessible to any other computer. This is the case with a PCIe-connected SSD addressed through an NVMe driver.

With NVMeF, a host server uses remote direct memory access (RDMA) to reach a drive in an NVMeF array. The host sees what appears to be another local drive, assigned and mapped to it alone. No other server can access that drive, which means no other server can access its data.
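To make the mapping point concrete, here is a minimal, hypothetical Python sketch, with class and host names of our own invention rather than any vendor's: a JBOF assigns each NVMe namespace (drive) to exactly one host, and any other host asking for that namespace, or its data, is simply refused.

```python
# Hypothetical model of NVMeF namespace-to-host mapping, for illustration only.
class NvmefJbof:
    def __init__(self, namespace_ids):
        # Each namespace (drive) can be mapped to at most one host NQN.
        self.owner = {ns: None for ns in namespace_ids}

    def map_namespace(self, ns, host_nqn):
        """Assign a namespace to a single host; refuse if already owned."""
        if self.owner[ns] is not None and self.owner[ns] != host_nqn:
            raise PermissionError(f"{ns} is already mapped to {self.owner[ns]}")
        self.owner[ns] = host_nqn

    def read(self, ns, host_nqn):
        """Only the owning host can reach the namespace's data."""
        if self.owner[ns] != host_nqn:
            raise PermissionError(f"{host_nqn} has no access to {ns}")
        return f"data from {ns}"

jbof = NvmefJbof(["nsid-1", "nsid-2"])
jbof.map_namespace("nsid-1", "nqn.host-a")
print(jbof.read("nsid-1", "nqn.host-a"))   # works: looks like a local drive
# jbof.read("nsid-1", "nqn.host-b")        # raises: the drive, and its data, is private
```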

Well then, let's abstract the drives and add some sort of drive/volume manager. Where? The obvious place is the array controller, which sees all the drives and receives all the IO requests. But doing that breaks end-to-end NVMeF: latency is added, IO operations take longer, the idea that NVMeF equals local drive access speed is broken, and controller bottlenecking becomes an issue, as a Datrium slide shows:

Datrium slide 1
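To picture the argument, here is a minimal sketch, with the stage names and path lists entirely our own illustration rather than Datrium's, contrasting the direct host-to-drive path with one that detours through a shared array controller: every IO on the second path passes through extra stages, and all hosts funnel through the same component.

```python
# Illustrative only: compare IO paths as lists of stages.
DIRECT_PATH = ["host NVMe driver", "RDMA NIC", "drive"]
CONTROLLER_PATH = ["host NVMe driver", "RDMA NIC",
                   "array controller CPU", "volume manager", "drive"]

def describe(path, name):
    print(f"{name}: {len(path)} stages -> {' -> '.join(path)}")

describe(DIRECT_PATH, "End-to-end NVMeF")
describe(CONTROLLER_PATH, "Via array controller")

# Every host's IO shares the controller stages, so the controller is both
# an added-latency step and a potential bottleneck as host count grows.
shared = set(CONTROLLER_PATH) - set(DIRECT_PATH)
print("Shared choke points:", ", ".join(sorted(shared)))
```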

If you have direct host server access to the array's drives across NVMeF then the array is effectively controller-less, and just a bunch of flash drives (JBOF), again as a Datrium slide shows:

Datrium slide 2

Datrium supports NVMe drives but does not have an NVMeF array in its product line. It's thinking about it. The company says that NVMeF inside an array doesn't solve the host-array network latency problem.

E8 point of view

Datrium doesn't have an NVMeF array. Startup E8 does: a dual-port NVMeF JBOF with 24 x 6.4TB SSDs. What does it think about an NVMeF array or JBOF being, or not being, a SAN?

Founder and CEO Zivan Ori said: "A SAN? That's associated with Fibre Channel and SCSI. I would call it NVMeF. This is a SAN replacement or next-generation of SAN. It may be a bit confusing to call it a SAN."

He thinks dual-controller arrays won't get the benefit of NVMeF unless they have direct server-to-drive RDMA links. Without them, IO has to leave the RDMA path and go through the array controller stack, incurring added latency for its pains. Ori thinks it's an issue that Pure is going to have to confront.

What happens if E8 customers do want to share data in the E8 JBOFs? A clustered file system is one answer, and he talked about a data warehouse customer running a SAS statistical analysis application. Before E8's system was installed, the SAS application ran on servers with local NVMe SSDs:

E8 SAS slide

Afterwards, GPFS – IBM's Spectrum Scale parallel file system software – was used instead of XFS and the SAS nodes parallelised IO access to the E8 shared NVMe storage.

Distributing controller-level intelligence

E8's system has an agent in each accessing server, as the diagram in the lower left of the slide below illustrates:

E8 scale-out architecture slide

Interestingly, Datrium has a similar concept: the controller logic for its array is distributed upstream to the accessing servers. It believes server-side logic is needed to provide storage array controller-type functionality in an NVMe JBOF environment:

Datrium conclusions slide

The final point in the Datrium slide above is: "Server-powered data management required to do more."

In other words, we think, the volume management-type functions previously carried out by an array controller have to be performed somehow in the accessing servers before the server initiates the NVMeF IO request.

Further, when multiple servers access the NVMeF JBOF, this volume management function would seem to need to be distributed across the accessing servers so that their data accesses can be coordinated.
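What such server-side volume management might look like is sketched below; the agent class, the extent structure and the coordination mechanism are all our own hypothetical illustration, not Datrium's or E8's design. Each accessing server resolves a logical volume address to a (target, namespace, offset) extent before issuing its NVMeF request, using a map that would have to be kept consistent across all the servers.

```python
# Hypothetical host-side volume-management agent; illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Extent:
    target: str      # NVMeF target (JBOF) address
    namespace: str   # NVMe namespace ID
    offset: int      # starting LBA within the namespace

class HostAgent:
    def __init__(self, shared_map):
        # shared_map would have to be coordinated across all accessing
        # servers (e.g. via a clustered metadata service) so that
        # allocations and accesses don't conflict.
        self.shared_map = shared_map

    def resolve(self, volume, lba):
        """Turn a logical volume address into a physical extent to target."""
        ext = self.shared_map[volume]
        return Extent(ext.target, ext.namespace, ext.offset + lba)

shared_map = {"vol0": Extent("jbof-1", "nsid-3", 0)}
agent = HostAgent(shared_map)
print(agent.resolve("vol0", 4096))   # where the server aims its NVMeF IO
```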

We suspect Datrium, which provided the press tour with a seven-slide sequence explaining why an NVMeF JBOF was not a data-sharing SAN, probably has technology under development to fix this problem. Why else take seven slides to explain, at such length, why it isn't being done at the moment? ®
