Prepare your data centre to face the future

It’s all in the design

When you are trying to persuade your company to spend a pile of cash on a new installation, you can be certain it will want to be sure the installation can support the business for the coming years.

Given that the average technology crystal ball is cloudy at best, how do you evolve your data centre while protecting against a potentially unpredictable future?

There are two key aspects to think about in order to secure the future of your installation. The first is making sure you can continue to upgrade the hardware, operating system and software you are currently running. The second is thinking about how you might connect it at each layer to other stuff.

Future-proofing

The first part is fairly simple. When we consider upgrading the platform we are asking ourselves whether we can install something new without expending too much effort. The upgrade might be from the same vendor or a different one.

At a storage level we have a variety of technologies – Fibre Channel, iSCSI and FCoE being the core three – and all have discernible paths into the future.

In the server arena, the various CPU architectures are pretty robust. We don't see VAX and MIPS any more; the IBM Power series remains a going concern; and the Intel family and its clones are sure to be long-lived, provided you have shifted from 32-bit CPUs to 64-bit processors as soon as possible.

Now that SCO has finally gone belly up, we have plenty of operating systems – Windows, Linux, IBM i and so on – with a promising future ahead of them (and if you are still running SCO, you deserve everything you get).

And in the fundamentals of the network infrastructure, we have overcome all the madness of the 1990s: we have binned 100VG-AnyLAN, ATM to the desktop, IPX and DECnet, and we have accepted that IP over Ethernet is the way to go.

Working together

Now let's look at integration and interoperability, starting with storage.

There is more than one way to provide an interface between the server hardware and the storage hardware.

Even if, for example, you use a dedicated Fibre Channel SAN, you may well have the option of running up iSCSI using the hardware you already have – or perhaps with a minimal investment in some iSCSI-optimising NICs.

If you don't presently have this option, look at how you might add it, because you are very likely to need it before long. The beauty of it is, though, that the evolution is simple to implement.
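As a very rough illustration of how cheap that first test can be, here is a Python sketch that checks whether an iSCSI portal is even reachable on its default TCP port before you invest in initiator configuration. The target address is a hypothetical placeholder, not anything from a real environment:

```python
import socket

# iSCSI portals listen on TCP 3260 by default. A plain TCP connect is a
# quick sanity check of the network path before any initiator setup.
ISCSI_PORTAL = ("storage.example.local", 3260)  # hypothetical target


def portal_reachable(addr, timeout=3.0):
    """Return True if a TCP connection to the portal succeeds."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print("iSCSI portal reachable:", portal_reachable(ISCSI_PORTAL))
```

It proves nothing about LUN masking or authentication, of course, but it does tell you whether the evolution path is physically open on your existing kit.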

Now for servers. At the physical server level, life is reassuringly easy. You are either Intel-based or you are not. If you are, the main step to take (aside from heeding my earlier advice to make sure you are using exclusively 64-bit hardware) is to understand the physical processors you have.

Virtue of virtualisation

That is because server virtualisation happens above, not in, the server hardware, so it doesn't matter that much if you have, say, an IBM cluster and an HP cluster next to each other.

If you are performing a vMotion move (shipping a server from physical host A to physical host B with no interruption), the badge on the front of the box doesn't actually matter. What does matter is what capabilities you have told the hypervisor the underlying processors have.

If a virtual server expects a particular feature of a processor to be available and you try to move it to a CPU without that feature, it will tell you to get stuffed (or if you have been rude to it that morning, it will move and then crash).

But as long as you make sure you always configure the hypervisor with the lowest common denominator hardware, you are leaving your server choice open.
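To make the lowest-common-denominator idea concrete, here is a minimal Python sketch with made-up feature inventories (on Linux you could read each host's real flags from /proc/cpuinfo): intersect the CPU feature flags of every host in the cluster and treat the result as the baseline you present to guests.

```python
# Made-up CPU feature inventories for three physical hosts.
host_features = {
    "host-a": {"sse2", "sse4_1", "sse4_2", "aes", "avx"},
    "host-b": {"sse2", "sse4_1", "sse4_2", "aes"},           # older box: no AVX
    "host-c": {"sse2", "sse4_1", "sse4_2", "aes", "avx", "avx2"},
}

# The safe baseline to present to guests is the intersection: only the
# features every physical host can honour.
baseline = set.intersection(*host_features.values())
print("Safe guest baseline:", sorted(baseline))

# A guest configured to require anything outside the baseline cannot be
# live-migrated freely around the cluster.
guest_requires = {"sse4_2", "avx"}
blockers = guest_requires - baseline
if blockers:
    print("Guest is not freely movable; needs:", sorted(blockers))
```

This is, in essence, what features such as VMware's Enhanced vMotion Compatibility automate for you: mask the guests down to what the weakest box can do, and the badge on the front of the box stops mattering.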

If you are not Intel-based, you are likely to have an easier job of dealing with hardware mismatches because you will be working within one family (for example IBM Power series CPUs).

Compatibility between versions and variants will in fact be easier than working with, say, Intel and AMD processors in virtual servers.

Ever had problems moving from one Power device to another, or from one Apple device to another? No, I thought not.

Mutual understanding

At the operating system level, interoperability and future-proofing are really very simple because everything is sitting on the IP-based Ethernet network I mentioned earlier.

All of the popular operating systems have the ability to communicate with each other and even to exploit each other's proprietary protocols. More importantly, this capability constantly gets better rather than worse: look at Linux's ability to integrate with Active Directory, for example, and how it has evolved over the years.
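As a small, hedged illustration of that cross-platform reach, here is a sketch of a Linux host querying Active Directory over LDAP using the third-party ldap3 Python library (pip install ldap3). The domain controller name, service account and directory layout are all hypothetical placeholders:

```python
from ldap3 import ALL, Connection, Server

# Hypothetical domain controller and service account.
server = Server("dc01.example.local", get_info=ALL)
conn = Connection(
    server,
    user="EXAMPLE\\svc_ldap",    # illustrative service account
    password="change-me",
    auto_bind=True,
)

# AD exposes standard LDAP object classes, so an ordinary search works.
conn.search(
    "dc=example,dc=local",
    "(&(objectClass=user)(sAMAccountName=jbloggs))",
    attributes=["cn", "mail"],
)
for entry in conn.entries:
    print(entry.cn, entry.mail)
conn.unbind()
```

The point is not this particular library but that a non-Microsoft box can speak to a Microsoft directory using nothing more exotic than standard LDAP.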

No problems there, then.

So storage, servers and the operating system have all been pretty straightforward to deal with, and the reason for that is simple: the primary pain point for the future-proof data centre is the network.

Within the data centre, the concepts we need to implement in the network are long-established and generally well understood – IP, Ethernet, Spanning Tree, LACP, OSPF, BGP, access control lists, admission control and so on. It is when we step outside our private data centre that things get complicated.

The looming cloud

The cloud is much to the fore at the moment. Many would say that cloud is simply a new word for an old concept of hosted or managed services and applications, and they would largely be right.

Whatever we call it, though, we are inevitably going to want to connect our data centres to more and more external services and applications, which means we need to be making sure, right now, that we have the right structure to do so. That means looking at the following.

Landing points for external systems: if you are connecting to an external service you will want to land it on a firewall, or at least a tightly configured router, possibly with NAT or VPN landing capabilities built in from day one. That is because you may need to cater for remote IP ranges overlapping local ones.

Link optimisation: more and more remote services can optimise your connectivity into them so long as you use the right technology (for example cloud storage services that use Riverbed's Whitewater optimisers, for which all you need is the right box at your end).

Directory service federation: many external services are able to integrate with your directory service, but you may need to establish a gateway to provide this integration (for example read-only connectivity into your Active Directory database via ADFS).

Layer 2 network extension: technology that tunnels layer 2 traffic over an IP WAN, so that a layer 2 segment can magically span multiple sites even though there is no native layer 2 connectivity between them.
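To make that last idea concrete, here is a conceptual Python sketch of the core trick: take an entire Ethernet frame and ship it as the payload of an IP packet. Everything here, addresses, port and frame contents, is illustrative; real products layer control planes, MAC learning and loop prevention on top of this.

```python
import socket
import struct

def build_dummy_frame():
    """Hand-build an illustrative Ethernet frame (not from a real NIC)."""
    dst = bytes.fromhex("ffffffffffff")    # broadcast destination MAC
    src = bytes.fromhex("02000000beef")    # locally administered source MAC
    ethertype = struct.pack("!H", 0x0800)  # IPv4
    payload = b"hello from site A"
    return dst + src + ethertype + payload

TUNNEL_ENDPOINT = ("192.0.2.10", 4789)     # hypothetical remote site

# The whole layer 2 frame rides inside an ordinary UDP datagram, so the
# WAN in between only ever sees routable IP traffic.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(build_dummy_frame(), TUNNEL_ENDPOINT)
sock.close()
```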

The problem is that some of these concepts don't yet have an obvious implementation method.

The first one is dead easy, because NAT is well understood and a VPN landing point is likely to be IPSec based.
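As a small illustration of why the NAT capability matters, Python's standard ipaddress module makes the overlap check trivial. All the prefixes below are illustrative RFC 1918 ranges, not real allocations:

```python
import ipaddress

# Your own address space.
local_ranges = [
    ipaddress.ip_network("10.0.0.0/16"),
    ipaddress.ip_network("192.168.0.0/24"),
]

# Ranges advertised by hypothetical external services.
remote_ranges = {
    "cloud-storage": ipaddress.ip_network("10.0.42.0/24"),  # clashes
    "saas-crm": ipaddress.ip_network("172.16.0.0/22"),      # clean
}

for service, remote in remote_ranges.items():
    clash = any(remote.overlaps(local) for local in local_ranges)
    verdict = "overlap - NAT required" if clash else "no overlap"
    print(f"{service}: {verdict}")
```

If the check comes back "overlap", you will be glad the landing point had NAT built in from day one rather than bolted on in a panic.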

Off the beaten track

The third one is similarly straightforward, because Active Directory is sufficiently common to be supported by most third parties. For those that don't support it, you can use the same hardware and software and present an LDAP stream in parallel.

Link optimisation is a bit harder, though. In our example we mentioned Riverbed hardware, but you can't simply whack in a Whitewater gateway and assume you will be future-proof.

There is no guarantee that the offsite storage provider you select next year will use compatible hardware. Similarly with LAN extensions: the concept is so new that there really isn't an obvious product to go with at the moment.

The bottom line is standards, of course: if there is a standard (or a de facto one in the case of Active Directory) then go for it.

Architect your network so you can bang in a landing point for an external service, run up a virtual server and play with ADFS. But when you have done that, move away from the standards a little and dip a toe in the water of “popular but non-standard”.

It costs you next to nothing in money or time to try, say, connecting via VPN to Amazon's cloud services, just to give it a whirl, so you will be well placed to adopt it in a few months' time when the business decides to go for it.

Conclusions

Within your own data centre, then, it is reasonably straightforward to make yourself future-proof: just don't make silly decisions and buy any wacko Betamax-style network products.

The trick is to realise that the future will actually involve your data centre being connected to a wide variety of externally hosted services and applications and to be in a position to do something about it.

Actually, no, scratch that. The trick is to give the business types a major grilling on the kind of stuff they think they might want to do in the future, establish at least the core connectivity and integration requirements that might soon thunder over the horizon, and do something about it now. ®
