Add 'Bimodal IT' to your buzzword bingo card: Faster... more stable... faster. But stable

Collapse cloud, containers and all silos...

Comment Thanks to Gartner, we have a new buzzword: bimodal IT. It’s nothing special actually, just a new way to describe common sense and the fact that the world – the IT world in this case – is not black and white.

In practice, in modern IT organisations it is better to find a way to integrate different environments instead of trying to square the circle all the time. This means that you can’t apply DevOps methodology to everything, nor can you deny its benefits if you want to deploy cloud-based applications efficiently. (Gartner discovers great truths sometimes, doesn’t it?)

But here is my question: does bimodal IT need separate infrastructures?

Bimodal IT doesn’t mean two different infrastructures

In the past few weeks I’ve published quite a few articles about network, storage, scale-out and big data infrastructures. Most of them address a common problem: how to build flexible, simple infrastructures that can serve legacy and cloud-like workloads at the same time.

From the storage standpoint, for example, I would say that a unified storage system is no longer synonymous with multi-protocol support per se; what matters far more is whether it can serve as many workloads as possible at the same time – say, a bunch of Oracle databases, hundreds of VMs and thousands of containers accessing shared volumes concurrently. The protocol used is just a consequence.

To pull it off, you absolutely need the right back-end architecture and, at the same time, APIs, configurability and tons of flexibility. Integration is another key part: the storage system has to work with all the different hypervisors, cloud platforms and, now, orchestration tools.
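
Just to make this concrete, here is a minimal sketch of what that kind of API-driven integration might look like, assuming a hypothetical array that exposes a REST provisioning endpoint and accepts a workload profile. Every name, field and number below is invented for illustration; real arrays each have their own API:

```python
import requests

# Hypothetical storage array REST endpoint (invented for this example).
STORAGE_API = "https://array.example.com/api/volumes"

# One back end, many consumers: the workload profile drives the protocol
# and the QoS settings, not the other way around.
WORKLOAD_PROFILES = {
    "oracle-db":  {"protocol": "fc",  "max_iops": 50000, "max_latency_ms": 1},
    "vm-farm":    {"protocol": "nfs", "max_iops": 20000, "max_latency_ms": 5},
    "containers": {"protocol": "nfs", "max_iops": 5000,  "max_latency_ms": 10},
}

def provision_volume(name: str, size_gb: int, workload: str) -> dict:
    """Ask the array for a volume shaped for a given workload type."""
    profile = WORKLOAD_PROFILES[workload]
    response = requests.post(STORAGE_API,
                             json={"name": name, "size_gb": size_gb, **profile})
    response.raise_for_status()
    return response.json()

# e.g. provision_volume("ci-scratch", 500, "containers")
```

The point is not the specific calls, but that an orchestrator can drive the array without a human in the loop.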

Storage for containers? (just an example of the wrong infrastructure)

Now, I haven’t had time to look at them yet, but there are a bunch of new startups focused on container storage. Really? Container storage? Just storage for containers?

It sounds a little bit odd, since most containers are stateless… In fact, I suppose most of these storage systems expose the NFS protocol (or I hope so, for simplicity at least). But why should anyone buy storage that works well only with containers? It doesn’t make any sense.

In which enterprise, or even ISP, do you have only containers? Like I said, I haven’t had time to investigate yet, but I will soon, because specialised storage doesn’t make any sense any more… does it? Maybe it’s just a marketing mistake.

Bimodal infrastructures are the key

No matter what kinds of workloads or types of Ops you have, the IT infrastructure must be ready to cope with all of them. I think that a bimodal IT/infrastructure has to implement a sort of macro multi-tenancy at its core. In this case we are not talking about multiple users accessing resources, but about different technologies or platforms running on top of the same infrastructure at the same time.

For example, if your organisation has three different teams for standard virtualisation (VMware?), IaaS cloud (OpenStack?) and next generation cloud (containers?), you have to offer a single horizontal infrastructure that can be quickly configured to offer the right kind of resources to each one of them when needed. Technologies like software-defined networking, QoS, advanced storage analytics and monitoring must be properly implemented to enable such a paradigm… but this is not enough.
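
As a rough sketch of what such macro multi-tenancy could look like in practice, here is a toy model of per-platform partitions carved out of one shared pool, each with its own QoS cap and network isolation. All names and numbers are invented:

```python
from dataclasses import dataclass

@dataclass
class Partition:
    """A slice of the shared infrastructure handed to one platform team."""
    platform: str       # e.g. "vmware", "openstack", "kubernetes"
    cpu_cores: int
    storage_tb: int
    max_iops: int       # QoS cap, so one platform can't starve the others
    network_vlan: int   # isolated via SDN

# Three teams, one horizontal infrastructure underneath.
partitions = [
    Partition("vmware",     cpu_cores=512, storage_tb=300, max_iops=80000, network_vlan=100),
    Partition("openstack",  cpu_cores=256, storage_tb=200, max_iops=60000, network_vlan=200),
    Partition("kubernetes", cpu_cores=128, storage_tb=50,  max_iops=30000, network_vlan=300),
]

def spare_cores(total_cores: int = 1024) -> int:
    """Whatever isn't pinned to a partition stays in the shared pool."""
    return total_cores - sum(p.cpu_cores for p in partitions)
```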

Data management and transparency

By adding more and more workloads to the same system, you’ll need a different kind of granularity to understand what is happening and to quickly optimise the infrastructure accordingly. For example, even though the classic IO-blender effect is no longer a problem with all-flash arrays, some vendors, like Coho Data, have started to analyse workload patterns to automate data positioning and cache pre-heating.
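
A toy version of that kind of pattern analysis might look like the following. The thresholds and the array calls are invented for illustration; real systems do this with far richer telemetry:

```python
def classify(io_trace):
    """Classify an IO trace, given as (offset, size, op) tuples,
    as read- or write-heavy and sequential or random."""
    assert io_trace, "need at least one IO sample"
    reads = sum(1 for _, _, op in io_trace if op == "read")
    seq = sum(1 for (off, size, _), (nxt, _, _) in zip(io_trace, io_trace[1:])
              if nxt == off + size)
    return {
        "read_heavy": reads / len(io_trace) > 0.7,
        "sequential": seq / max(len(io_trace) - 1, 1) > 0.8,
    }

def place(volume, io_trace, array):
    """Choose data positioning and pre-heat the cache from the pattern."""
    pattern = classify(io_trace)
    if pattern["read_heavy"] and not pattern["sequential"]:
        array.pin_to_flash(volume)        # random reads love flash
        array.preheat_cache(volume)       # warm the cache ahead of peak hours
    elif pattern["sequential"]:
        array.move_to_capacity_tier(volume)  # streams do fine on slower media
```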

This brings automated tiering to the next level and allows the deployment of smarter and cheaper storage infrastructures capable of serving a broader set of workloads at the same time. Watch this video from Storage Field Day 8 to understand better what I’m talking about:

[Video: Coho Data presentation at Storage Field Day 8]

The separation between logical and physical layers is fundamental for modern IT infrastructures, and for bimodal infrastructures in particular. End users hate migrations, especially migrations that involve data; all infrastructure components should be swappable or upgradable without touching the data. The same applies to data movements between different storage tiers.
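
In code terms the principle is plain indirection: clients hold a stable logical handle while the physical back end behind it changes. A minimal sketch, with all names invented:

```python
class LogicalVolume:
    """A stable handle that clients mount; the physical backing can change."""

    def __init__(self, name, backend):
        self.name = name          # what the user sees; never changes
        self._backend = backend   # flash, disk, cloud... swappable

    def read(self, offset, length):
        return self._backend.read(offset, length)

    def migrate(self, new_backend):
        """Move the bits; the client-facing name and path stay put."""
        new_backend.copy_from(self._backend)
        self._backend = new_backend   # the swap is invisible to clients
```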

In this same space, vendors are working on mechanisms to transparently move data volumes between primary and secondary storage, as well as the cloud, simplifying backup/DR operations while automating data/copy management.
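
Building on the sketch above, a copy-management policy can be as simple as a list of targets plus a retention rule; again, everything here (targets, snapshot attributes, the has/replicate methods) is hypothetical:

```python
import datetime

# (target name, replicate snapshots younger than) - invented values.
COPY_POLICY = [
    ("secondary-array", datetime.timedelta(days=7)),
    ("cloud-archive",   datetime.timedelta(days=365)),
]

def enforce(snapshots, targets):
    """Fan recent snapshots out to secondary and cloud targets per policy."""
    now = datetime.datetime.now()
    for target_name, retention in COPY_POLICY:
        target = targets[target_name]
        for snap in snapshots:
            if now - snap.created < retention and not target.has(snap):
                target.replicate(snap)   # asynchronous copy off the primary
```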

Closing the circle

Bimodal IT has always existed, even before DevOps. When enterprises started to adopt servers (and client-server applications) after mainframes, they had two different ways to manage operations; then it happened with virtualisation, and now it’s happening with the cloud. For each new technology stack, organisations have built a new silo and organised Ops accordingly. And Ops has always been faster for new technology stacks than it was for the older ones; it’s easier to operate a virtual environment than physical servers, for example.

But something has changed.

In the past, each single technology silo had its own infrastructure stack. Now, thanks to SDN, SDS and next generation storage solutions, it is possible to collapse different infrastructure stacks onto a single, larger infrastructure capable of serving physical, virtual and cloud workloads concurrently, lowering costs and speeding up operations for both legacy and new workloads.
