
Balancing act for servers that flash the cache

Lazy storage arrays making data-hungry servers wait

Comment Multiple flash locations in the server-storage stack are upsetting balanced I/O conventions and making overall system design much more difficult.

Designing and implementing server-to-storage systems is going to become very much harder, because existing assumptions about server and storage I/Os per second (IOPS) handling are being swept aside. Virtualised, multi-core, cache-enhanced servers can digest and generate data faster than many hard drive storage arrays can deliver and receive it. How do you build balanced server-storage systems when this is happening?

Let's look at the basic server-to-external-storage stack. We start with a server motherboard carrying four-, six-, even eight-core processors, which talk via memory and a PCIe bus to an I/O adapter. This links across a network to a storage array controller, which is connected to hard disk drive shelves. Imagine these are virtualised servers and it's easy to see, put crudely, a server engine room capable of handling thousands of IOPS talking to a storage array only capable of handling hundreds.

Getting the storage array's IOPS capacity up to the server's level will require a large increase in array spindles or solid state drive (SSD) use, plus increased controller I/O capability. This can be done by using flash memory caching but, unfortunately, flash storage can also be used as a server cache, in various places on the server side of the stack, to increase the server's IOPS rate, thus unbalancing things again.
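As a rough illustration of the spindle arithmetic involved, here is a minimal back-of-envelope sketch in Python. The server demand and per-device IOPS figures are illustrative assumptions, not vendor specifications.

```python
import math

# All figures below are assumptions chosen purely for illustration.
SERVER_IOPS_DEMAND = 50_000   # assumed aggregate demand from a virtualised multi-core host
IOPS_PER_15K_DISK = 180       # rough figure for a 15K RPM hard drive
IOPS_PER_SSD = 30_000         # rough figure for an enterprise flash SSD

def drives_needed(demand_iops: int, iops_per_drive: int) -> int:
    """How many drives the array needs for its IOPS capacity to match server demand."""
    return math.ceil(demand_iops / iops_per_drive)

if __name__ == "__main__":
    print("15K spindles needed:", drives_needed(SERVER_IOPS_DEMAND, IOPS_PER_15K_DISK))
    print("SSDs needed:        ", drives_needed(SERVER_IOPS_DEMAND, IOPS_PER_SSD))
```

On these assumed numbers the array needs hundreds of spindles, or a handful of SSDs, just to keep pace - which is the imbalance the rest of this piece is about.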

We can envisage server motherboards coming with flash modules on them, acting as a cache between the server engine and its data store. Intel's Braidwood technology is heading this way, it appears, and will increase a server's hunger for data. Then there could be a flash store connected to the server's PCIe bus: another cache. Fusion-io and Violin Memory both have such flash-based I/O acceleration cards, as does Oracle's Sun-powered Exadata server. This caching also increases the I/O burden on the connected storage.

Next there could be a flash cache on the I/O adapter used to connect to the storage. Adaptec is showing the way with its data conditioning ideas and an Intel-supplied flash cache addition to its 5000 and 2000 RAID controllers. We have three potential tiers of flash cache here, all increasing the burden on the storage array.

Skipping across the network link to the storage array, we could have a flash I/O accelerator card in the array controller. This is what NetApp's PAM (Performance Acceleration Module) is, admittedly in DRAM form initially, but now with a flash version announced, too. That makes four flash locations, with the fifth being SSDs actually replacing hard disk drives and providing a so-called tier zero of storage. Virtually every storage array supplier is doing this, with the majority using STEC SSDs and a few settling on Intel.

Finally, the sixth flash location is a replacement for the hard drive array itself. This is what Texas Memory Systems (RamSan), Sun (FlashFire) and Violin Memory (1010 Memory Appliance with network head) are doing, and what Fusion-io might do with its ioSAN. These can be viewed as flash data stores, not caches, coming in either direct-attached (flash DAS) or network-attached (flash SAN) form.

It seems a nonsense to have flash in all six locations. More sensibly, we might add a flash cache to servers and/or have either a flash store or a flash-enhanced storage array. The flash-enhanced array would have a cache for hot data, or else incorporate a flash store to augment a cheap, high-capacity bulk SATA hard drive store.

A system vendor - say Cisco (soon), Dell, HP, IBM or Oracle/Sun - would design systems to provide server and storage IOPS-handling and data capacity in the most balanced, scalable and cost-effective way. Oracle's Exadata 2 is an example of this. A server and storage system integrator or VAR will also need to do it, but will not have the same resources as tier-one systems providers, and will probably do a clumsier job. A data centre operator buying servers and storage separately will likewise face a complicated job in achieving balanced IOPS handling across its servers and storage. Tools are going to be needed.
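As a very rough sketch of the kind of check such a tool might make, the Python below compares an assumed server IOPS demand against what each tier of the stack can deliver. The tier names, figures and the "balanced" threshold are all assumptions for illustration; no shipping product is being described.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    deliverable_iops: int  # what this tier can serve from cache or media (assumed figure)

def check_balance(server_demand_iops: int, tiers: list[Tier], slack: float = 0.2) -> str:
    """Compare aggregate tier supply against server demand and return a crude verdict."""
    supply = sum(t.deliverable_iops for t in tiers)
    ratio = supply / server_demand_iops
    if abs(ratio - 1.0) <= slack:
        verdict = "balanced"
    elif ratio < 1.0:
        verdict = "storage-bound"
    else:
        verdict = "over-provisioned"
    print(f"demand={server_demand_iops}, supply={supply}, ratio={ratio:.2f} -> {verdict}")
    return verdict

if __name__ == "__main__":
    # Hypothetical stack: server-side PCIe flash cache, array controller flash, disk shelves.
    stack = [
        Tier("server PCIe flash cache", 40_000),
        Tier("array controller flash", 20_000),
        Tier("SATA disk shelves", 4_000),
    ]
    check_balance(60_000, stack)
```

A real tool would of course have to account for read/write mix, cache hit rates and working-set size, but the basic job - matching supply to demand across the tiers - is the same.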

It's not clear where these tools will come from, and it's likely that the need to accomplish this technically different balancing act may help drive a focus on integrated server and storage systems from suppliers, and away from the idea of treating servers and storage as relatively separate purchases. This aspect of flash could prove to be quite disruptive. ®
