Orchestration and the server environment

Just pie in the sky?

Workshop Few words in the IT industry’s vocabulary are more grandiose than ‘orchestration’, evoking images of symphonic movements, rows of groomed musicians and wild-haired, baton-pointing conductors. Just how the term came to be used for the allocation of server resources must leave IT managers more than a little flummoxed, however.

It’s not that things are static in the server room – far from it, as new and changing requirements keep stress levels hovering just a fraction above what might be considered healthy. But orchestration and all it suggests – the provisioning, movement and re-allocation of resources as and when necessary, presumably requiring no more than a flick of the coat tails and a poke of the baton – remains somewhere in the distance for most organisations.

The term itself may have been in use for decades, but it really only came into vogue when x86 servers first started to be considered appropriate replacements for what had previously been more proprietary hardware. It seems almost a lifetime ago that you had to choose a make and model of server from one of the big hardware vendors of the time – IBM, Sun, HP and so on (go back a bit further and we have DEC, Tandem and all the rest). But then ‘industry standard servers’ took hold, first in rack-mounted, then in blade form.

It was probably this latter wave – coinciding as it did with the dot-bomb and subsequent drive towards consolidation – that triggered ideas around orchestration. Start-ups such as ThinkDynamics (quickly snapped up by IBM) proudly boasted how they could configure a server in just a few minutes, using a pre-defined template (and no doubt, a few scripts running behind the scenes). It all sounded great – particularly when put against the familiar challenge of server allocation taking days, if not weeks.
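To make the pitch concrete, it amounted to little more than filling in a pre-defined template with per-server details and letting a script do the rest. The sketch below shows the general idea in Python; the template fields and the apply_template() helper are hypothetical illustrations, not any vendor’s actual tooling.

    # A minimal sketch of template-driven provisioning, of the kind the early
    # orchestration start-ups described: a pre-defined template plus a little
    # scripting glue. Field names and the helper are illustrative only.
    web_server_template = {
        "cpus": 2,
        "memory_gb": 4,
        "os_image": "linux-base",
        "packages": ["httpd"],
    }

    def apply_template(hostname, template):
        """Return a concrete server definition built from a named template."""
        server = dict(template)          # start from the template defaults
        server["hostname"] = hostname    # fill in the per-instance details
        return server

    print(apply_template("web-01", web_server_template))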

Orchestration promised – and indeed continues to promise – so many great things, not just in terms of rendering IT operations more efficient and service delivery more effective, but also in enabling greater visibility of the server environment. Building on top of this was – and is – the idea of chargeback: if the IT manager is sufficiently au fait with who is using what, there is the opportunity to at least tell different parts of the business how much their IT is costing, even if no money actually changes hands.
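The chargeback arithmetic need be no more sophisticated than apportioning the cost of a shared pool in proportion to what each department consumed. The sketch below shows the idea in Python; the figures and department names are purely illustrative.

    # A minimal sketch of showback/chargeback: split a monthly bill for a
    # shared server pool across departments by the CPU-hours each consumed.
    # All numbers here are made up for illustration.
    MONTHLY_POOL_COST = 12000.00  # total cost of the shared pool

    usage_cpu_hours = {
        "finance": 1800,
        "marketing": 600,
        "logistics": 1200,
    }

    total_hours = sum(usage_cpu_hours.values())

    for dept, hours in usage_cpu_hours.items():
        share = hours / total_hours
        print(f"{dept:<10} {hours:>5} CPU-hours -> {MONTHLY_POOL_COST * share:,.2f}")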

But here we are, about to start a new decade, and this brave new world of server provisioning, resource allocation on the fly and the charging back of IT costs remains somewhere in the dim and distant future for the majority of IT departments.

We want to know why you think this is – do you put it down to the fact that it was only a mirage in the first place, a feature of marchitecture rather than architecture? Or perhaps we’re just not quite there yet, and some technological pieces (perhaps beginning with the letter ‘v’) still need to fall into place before it can happen. Maybe the problems aren’t in the technology at all, but lie more in the politics of your own organisation: silo-based mentalities, a systems-ownership culture and resistance to change.

Whatever the case, we’d be interested in your views.
