
How to get the fun stuff back in your data centre

Face it, cloud is not fun or cool

The cloud is a fabulous concept. If you want to try something out, or prototype your latest idea, or give yourself a relatively inexpensive disaster recovery setup, get in there and run up a cloud-based installation.

There's something that the cloud lacks, though: it's just not fun or cool. Lists of virtual machines in the Azure management GUI aren't sexy. Neither is the pop-up on AWS that tells you the settings you need to paste into your router to get your VPN up and running. And incidentally, Microsoft and Amazon, I'm not having a pop specifically at you – they're just random examples that apply to the cloud in general rather than to any one supplier in particular.

There are times, then, when stuff is only proper fun when it's on-prem.

Making your infrastructure look good

One of the things that has to stay in your data centre, assuming you have one, is the infrastructure that runs it. There's nothing quite as impressive as a spankingly tidy cabinet, every cable run perfectly along the cable management tray, perhaps colour-coded for important connections.

I've had managers and auditors become wide-eyed and gasp when I've opened the cabinet door on a particularly sexy collection of flashing lights and cables (not, I hasten to add, my own cabling handiwork – happily I used to have a colleague called Chris who was amazing at that stuff).

And if your hardware vendor does cool kit, that's a bonus. My favourites were 3PAR's (now HP's) funky racks with yellow flashes across the doors. A colleague once referred to the yellow rack and the humming of the disks therein as “a box of angry bees”. Whether that was meant kindly I couldn't tell you; what I do know is that everyone commented on it.

Sadly, of course, the best any vendor can hope to achieve is second best, because the best-looking piece of kit ever devised has been out of production for years.

Someone told me only this week of her experience rather a lot of years ago when she was a trainee accountant at the UK Atomic Energy Authority … and she went to see the Cray in its data centre. If you think anything can beat a proper Cray (the Cray-1, not one of those poncey 19-inch-rack-with-funky-doors ones they do these days), you're welcome to submit challenges in the Comments section of this page.

Trying things out for real

I come from a world of running resilient global infrastructures. This tends to mean a global network connected to a bunch of kit at each location that largely follows a standard architecture, since if everything's the same it's a breeze to manage. So you have the same firewalls, the same family of switches, the same remote management servers, the same fileservers, the same storage devices, and so on at each site; the only thing that usually changes is the number of switches in the stack, reflecting the fact that some offices are bigger and have more users than others. And of course the only real way to test the resilience of your kit is to have a real setup to play with.

Doing it for real has two key advantages. First, you're proving conclusively that it does what you expect. Yes, you can emulate a link failure by downing the port on (say) the WAN router, but it's still electrically connected in some cases. I've seen instances where downing the port didn't cause the link to go down entirely, but pulling the cable did – handy to know when you're testing your failover design.
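If you want a number rather than a warm feeling when you do pull that cable, something along the lines of this rough Python sketch will do the job: probe a host on the far side of the link once a second and log how long it disappeared for, once with the port downed and once with the cable pulled. The target address is a placeholder and it assumes a Linux-style ping binary, so adjust to taste.

#!/usr/bin/env python3
"""Crude failover timer: probe a host once a second and report any outage.

Run it against something on the far side of the link you're about to break,
then down the port or pull the cable and compare the gap lengths.
Assumes a Linux-style `ping` binary; adjust the flags for other platforms.
"""
import subprocess
import time

TARGET = "192.0.2.10"   # placeholder address of a host beyond the failover point
INTERVAL = 1.0          # seconds between probes

def host_up(target: str) -> bool:
    """Single ICMP probe; True if it answered within one second."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", target],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def main() -> None:
    outage_started = None
    while True:
        now = time.monotonic()
        if host_up(TARGET):
            if outage_started is not None:
                print(f"Recovered after {now - outage_started:.1f}s of downtime")
                outage_started = None
        else:
            if outage_started is None:
                outage_started = now
                print("Target unreachable - outage started")
        time.sleep(INTERVAL)

if __name__ == "__main__":
    main()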

Demos of physical kit look great

Second, though, is when you're trying to sell the idea to the CFO, or the company's investment committee, so they'll give you the money to actually do it.

Take a bunch of senior managers to the data centre, open up the rack on your test network, run up a funky streamed music video on your laptop, and invite them to do their worst. “Pick a switch and pull the power on it,” you can tell them, and the video keeps on humming.

Stuff the power back in and watch the monitoring alerts all turn green when the switch is back on line. Invite them to pull the cable from the primary (simulated) WAN connection in the knowledge that it's supposed to be resilient; see the video pause for a few seconds while BGP re-converges and then pick up where it left off without you doing anything.

If you're a techie, you think this is cool and you feel smug; if you're a manager you think the techie is some kind of wizard who does weird magic because until this point they thought this whole resilient technology was unfounded bollocks that you made up.

Of course you could have demonstrated this from a distance by electronically downing ports, but being there in person and seeing it for real is worth a thousand semi-artificial demos.

Beating the auditors at their own game

In these days of cloud computing, auditors can be a royal pain in the arse. Particularly the younger ones who are sent to do the initial on-site interviews and whose sole contribution to the process is the ability to write down what they're told.

I love taking auditors to data centres. They're so used to people saying: “Oh, that's in the cloud” or “It's in our service provider's premises … here's a photocopy of their ISO27001 certificate” that they spend their lives with a suspicious look on their face.

So it's great when they ask: “Can you tell me where XXX is stored?” You email the data centre receptionist surreptitiously to tell him to be particularly pedantic about ID, then shepherd the auditors outside, hop in the car, drive down to the data centre, go through the (now overly onerous) entry procedure, open the door of cabinet C23, point to the third disk shelf down, and say: “It's on there.” Even better, you nod to cabinet D14 and mention: “Oh, it's in there as well, and I won't bother showing you the other data centre as it's a long way away, but it's there too.”

Beancounters like boxes you can point at, particularly when they have big black-and-white labels saying “CORP-MailServer-01”. Auditor-baiting is a great game, and you can only do it if you have core stuff in your DC.

But finally, the big one

You may, of course, end up deciding to move your entire production application world into the cloud. It's inexpensive, security isn't regarded as an excessive problem, and support costs generally go down markedly when someone else has to look after the hardware and the software upgrades.

Even if you do, though, the data centre remains the ideal place to do your architecture tests and prototyping – trying things out and seeing how they behave. The example I gave earlier about inviting people to pull the connection out of the simulated WAN link is exactly what I'm talking about: a platform that physically exists but isn't part of the production infrastructure. It's got routers, servers, switches, storage, bits of cable, its own internet connection, and preferably additional tools such as a WAN emulator, dedicated PC for network monitoring, and so on.
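The WAN emulator, incidentally, needn't be an expensive appliance: a spare Linux box with a couple of network cards and the kernel's netem queueing discipline will fake delay, jitter and packet loss perfectly well. Something like the Python wrapper sketched below is enough to make the lab LAN feel like a long-haul circuit – the interface name, delay and loss figures are all placeholders to tune for whichever link you're pretending to have.

#!/usr/bin/env python3
"""Turn a spare Linux box into a crude WAN emulator by driving `tc netem`
from Python. Interface name, delay and loss figures are placeholders.
Needs root and a kernel with the netem qdisc (most distributions ship it).
"""
import subprocess

IFACE = "eth1"          # placeholder: the interface facing the "remote" side

def set_wan_profile(delay_ms: int, jitter_ms: int, loss_pct: float) -> None:
    """Apply delay/jitter/loss to outbound traffic on IFACE."""
    clear_wan_profile()  # start from a clean slate
    subprocess.run(
        ["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
         "loss", f"{loss_pct}%"],
        check=True,
    )

def clear_wan_profile() -> None:
    """Remove any existing qdisc; ignore the error if none is present."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"],
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

if __name__ == "__main__":
    # Pretend the lab LAN is a transatlantic circuit for a while.
    set_wan_profile(delay_ms=80, jitter_ms=10, loss_pct=0.5)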

The cloud has a fairly core problem: it's the cloud. You have no idea of the underlying hardware, or how far server A physically sits from server B. You have high-level monitoring but nothing below a very abstract set of statistics, so although you could use something like PerfMon/TypePerf at the Windows level (or the Linux equivalent if you're an Open Source kind of person), you have no idea what's going on on the network.
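On your own tin, by contrast, you can log whatever counters you fancy and know exactly which box and which NIC they belong to. A rough sketch of the sort of thing I mean, using the third-party psutil library rather than TypePerf so it runs anywhere Python does:

#!/usr/bin/env python3
"""Rough equivalent of a TypePerf counter log, using the third-party psutil
library: sample CPU, memory and NIC throughput once a second and print CSV.
On physical kit these numbers map to hardware you can point at; in the cloud
they float on top of whatever the provider is really doing underneath.
"""
import time
import psutil  # pip install psutil

def main() -> None:
    print("timestamp,cpu_pct,mem_pct,net_sent_kBps,net_recv_kBps")
    prev = psutil.net_io_counters()
    while True:
        cpu = psutil.cpu_percent(interval=1)       # blocks for the 1s sample
        mem = psutil.virtual_memory().percent
        cur = psutil.net_io_counters()
        sent_kbps = (cur.bytes_sent - prev.bytes_sent) / 1024
        recv_kbps = (cur.bytes_recv - prev.bytes_recv) / 1024
        prev = cur
        print(f"{time.time():.0f},{cpu},{mem},{sent_kbps:.1f},{recv_kbps:.1f}")

if __name__ == "__main__":
    main()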

Particularly for an application specialist this is a big deal: in any networked application each contact between endpoints has several phases, from the initial DNS lookup right through to the delivery of the results to the user. In the cloud you just can't see this – so research on how your apps perform in this regard needs you to have some kind of data centre presence with some real, physical equipment in it.
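To make that concrete, here's a rough Python sketch that splits a single HTTPS fetch into its phases – DNS lookup, TCP connect, TLS handshake and time to first byte. The hostname is only a placeholder, so point it at one of your own endpoints. On a physical lab you can line these figures up against what the switches and the WAN emulator tell you; in the cloud, the total is all you ever really get.

#!/usr/bin/env python3
"""Break one HTTPS fetch into its phases: DNS lookup, TCP connect, TLS
handshake, and time to first byte. HOST is a placeholder endpoint.
"""
import socket
import ssl
import time

HOST = "example.com"   # placeholder - use one of your own endpoints
PORT = 443

def main() -> None:
    t0 = time.perf_counter()
    ip = socket.getaddrinfo(HOST, PORT, type=socket.SOCK_STREAM)[0][4][0]
    t_dns = time.perf_counter()

    sock = socket.create_connection((ip, PORT), timeout=10)
    t_tcp = time.perf_counter()

    ctx = ssl.create_default_context()
    tls = ctx.wrap_socket(sock, server_hostname=HOST)
    t_tls = time.perf_counter()

    tls.sendall(f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode())
    tls.recv(1)            # wait for the first byte of the response
    t_first_byte = time.perf_counter()
    tls.close()

    print(f"DNS lookup:    {(t_dns - t0) * 1000:7.1f} ms")
    print(f"TCP connect:   {(t_tcp - t_dns) * 1000:7.1f} ms")
    print(f"TLS handshake: {(t_tls - t_tcp) * 1000:7.1f} ms")
    print(f"First byte:    {(t_first_byte - t_tls) * 1000:7.1f} ms")

if __name__ == "__main__":
    main()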

And the beauty is that you can often equip your R&D “lab” without costing the earth. After all, you didn't throw out those end-of-life switches or routers, did you? In most cases you replace kit not because it's completely unusable but because the vendor no longer supports it and hence it's no use in a business-critical infrastructure. So if you gave me a few Cisco 2600 routers, 3750G switches, ASA 5510s, four-year-old Dell servers and the like I'd be perfectly happy. Although you'll need to buy some stuff, it won't be an expensive ground-up purchase.

And the point is I'd be able to let my gang run riot with them and practise stuff. Flash an ancient copy of the ASA firmware onto a 5510 and let them figure out how to upgrade it to the latest release without breaking the VPN connection. We can unplug things and see how the database copes, and whether it manages to get its bearings and pick up where it left off once the connection comes back.
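The database half of that exercise doesn't need anything clever on the client side, either: just a loop that keeps writing numbered rows and reconnects when the plug comes out, so a gap in the sequence afterwards tells you exactly what was lost. The sketch below is written against psycopg2/PostgreSQL purely as an example, with a made-up DSN and test table; substitute whatever driver your own database uses – the shape of the loop is the point.

#!/usr/bin/env python3
"""Keep inserting numbered rows into a test table while someone pulls the
cable, retrying the connection until the database comes back. The DSN and
the pull_the_plug(seq, ts) table are hypothetical lab fixtures.
"""
import time
import psycopg2  # pip install psycopg2-binary

DSN = "host=10.0.0.50 dbname=labtest user=lab password=lab"  # placeholder lab DSN

def connect_with_retry():
    """Keep trying until the database answers again."""
    while True:
        try:
            return psycopg2.connect(DSN, connect_timeout=3)
        except psycopg2.OperationalError as exc:
            print(f"still down: {str(exc).strip()}")
            time.sleep(1)

def main() -> None:
    conn = connect_with_retry()
    seq = 0
    while True:
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO pull_the_plug (seq, ts) VALUES (%s, now())",
                    (seq,),
                )
            conn.commit()
            print(f"committed row {seq}")
            seq += 1
            time.sleep(1)
        except psycopg2.OperationalError:
            print("connection lost - waiting for the database to come back")
            conn = connect_with_retry()

if __name__ == "__main__":
    main()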


Buy some assorted hard disks – SATA and SAS spinning disks and various types of Flash drive – and do real benchmarks and show yourself just what the difference is. Record what you see, because when you then look back to the cloud for your production systems you'll be so much more convinced that (say) the extra performance of the SSD storage option is worth it – because you'll remember that “Holy crap!” moment when you saw how fast the benchmark was on your own physical box, run by your own fair hand.
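The benchmark itself doesn't have to be sophisticated, either. A proper tool such as fio will give you prettier numbers, but even a crude sequential-write test like the Python sketch below – the mount point is a placeholder for whichever drive you're testing – is enough to make the gap between spinning rust and flash impossible to miss.

#!/usr/bin/env python3
"""Crude sequential-write benchmark: write 1 GiB of zeroes to the disk
mounted at TEST_PATH, fsync it, and report throughput. TEST_PATH is a
placeholder; point it at the drive under test and make sure there's room.
"""
import os
import time

TEST_PATH = "/mnt/testdisk/bench.tmp"   # placeholder mount point of the disk under test
TOTAL_BYTES = 1 * 1024**3               # 1 GiB
CHUNK = 4 * 1024**2                     # write in 4 MiB chunks

def main() -> None:
    buf = b"\0" * CHUNK
    written = 0
    start = time.perf_counter()
    with open(TEST_PATH, "wb") as f:
        while written < TOTAL_BYTES:
            f.write(buf)
            written += CHUNK
        f.flush()
        os.fsync(f.fileno())            # make sure it actually hit the disk
    elapsed = time.perf_counter() - start
    os.remove(TEST_PATH)
    print(f"Wrote {written / 1024**2:.0f} MiB in {elapsed:.1f}s "
          f"= {written / 1024**2 / elapsed:.0f} MiB/s")

if __name__ == "__main__":
    main()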

Research is fun

The cloud abstracts everything too much to be useful for infrastructure research and testing, and so an in-house alternative is the obvious way to go.

And the thing is, engineers are like children: we like stuff that's new, and we like trying stuff out and finding things out for ourselves. And if this can involve real engineering, with real metal boxes (preferably with the lids off where possible), and flashing lights, and bits of electric string, all the better.

And this means physical kit. On our premises. Combine this with the undeniable fact that experimenting is fun, and it's the obvious way of getting the fun stuff back into the data centre. ®
