
Hybrid cloud’s growing pains – and how to beat them: A guide to raising a good platform, so you can raise a glass later

Our gentle introduction to mixing on- and off-prem kit

Backgrounder Where once the future was public cloud, now it is hybrid – and by hybrid, we mean a mix of public and private: virtualised, elastic, and on-demand resources hosted by someone else, combined with on-premises infrastructure.

While uptake of cloud as a whole continues to grow, Gartner has predicted that by 2020, 90 per cent of organisations will have adopted the hybrid approach.

Hybrid gives you access to the benefits and scale-up/scale-down potential offered by public providers while letting you retain sensitive data – and responsibility for backup and recovery – on-premises. And it’s data that’s important here. Growing with cloud means spending on big data and analytics: IDC pegs related hardware and software spending growth at a CAGR of 11.9 per cent until 2022, to $260bn.

What does this hybrid world look like? You may have an online storefront with the customer-facing front-end applications hosted on a public cloud, but with the backend systems housing sensitive customer data, and backups, kept on-premises. About 31 per cent of organisations reported following this approach in a survey by IDC.

These cloudy services can run the gamut: from relatively simple productivity tools, like Office 365, to SAP and hard-core data-crunching workloads and analytics. Just look at AWS, running Hadoop and providing Redshift warehousing, plus Athena and Kinesis for interactive and real-time analytics.

The shift of data warehouses, Hadoop clusters, and NoSQL databases to the cloud is – according to IDC – driving uptake of cloud. IDC has forecast this market will grow 10 times faster than the comparable market for on-premises analytics solutions over the next several years.

That’s all well and good, but for all its pluses, hybrid can also be an Achilles heel for your operations, thanks to outages, latency, security, regulation, and costs.

Many IT departments may be ready and willing to go hybrid, but few are prepared, according to Archana Venkatraman, IDC Europe research manager. She tells us 90 per cent of organisations view multi- and hybrid cloud as a natural evolution of their IT, yet fewer than 10 per cent are multi-cloud ready. That means they lack “a secure, automated, orchestrated, transparent and interoperable multiple cloud-based architecture,” according to Venkatraman.

End of the romance

Major cloud providers are pretty reliable, and specify uptime guarantees of something like 99.99 per cent per year. Yet even that means you should expect just under an hour of downtime a year, and a major outage is bound to happen sooner or later.
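If you want to sanity-check that arithmetic yourself, a minimal sketch in plain Python – nothing provider-specific about it – turns an uptime percentage into a downtime budget:

```python
# Convert an uptime SLA percentage into the downtime budget it implies.
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_budget_minutes(sla_percent: float) -> float:
    """Minutes of downtime per year allowed under a given uptime SLA."""
    return MINUTES_PER_YEAR * (1 - sla_percent / 100)

for sla in (99.9, 99.95, 99.99, 99.999):
    print(f"{sla}% uptime -> {downtime_budget_minutes(sla):.1f} minutes/year")

# 99.99 per cent works out to roughly 52 minutes a year -- just under an hour.
```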

AWS, for example, suffered its most recent outage – of just over an hour – in June 2018, though this was relatively minor. A monster AWS glitch took down users for several hours in 2017. This year isn’t done, so it’s too early to round up its big outages, though you can get a snapshot of 2017’s big hits and their causes right here.

When outages happen, databases, files, and applications may be unavailable, meaning you may not be able to function as an operation. If an outage lasts for more than a few minutes, it is likely to lead to financial loss or lost custom. In this scenario, your organisation needs to have a backup plan.

For critical applications, this means a backup and recovery process that can kick in and take over the AWOL workload or resources in a short space of time. You could have a mirror of key applications running all the time to fail over to, but this would likely cost too much. The alternative is to provision the necessary emergency infrastructure on the fly, on a separate platform from where your production system runs, and rapidly restore the applications to their pre-outage state so work can continue.

If you design your infrastructure so that it can be dynamically deployed and scaled, you'll be able to quickly roll out the necessary infrastructure to pick up the slack when the primary systems fall over.
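To make that concrete – and this is an illustrative sketch only, assuming AWS as the standby platform and a pre-baked machine image, with the AMI ID, subnet, and instance type below being placeholders – a recovery script might boot replacement capacity on demand along these lines:

```python
# Minimal sketch: boot a replacement instance from a pre-baked image
# when the primary system is unreachable. The AMI ID, subnet, and
# instance type used here are placeholders, not real resources.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def launch_standby(ami_id: str, instance_type: str, subnet_id: str) -> str:
    """Start one standby instance and return its ID."""
    resp = ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        SubnetId=subnet_id,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "dr-standby"}],
        }],
    )
    return resp["Instances"][0]["InstanceId"]

# e.g. launch_standby("ami-0123456789abcdef0", "m5.large", "subnet-0abc1234")
```

The point is less the specific calls than the shape: everything the standby needs is captured in an image or template, so spinning it up is one API call rather than a weekend of manual rebuilding.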

Ultimately, backup and recovery is your responsibility, not that of the service provider and, looping back to Venkatraman, this is where many are unprepared. Cloud providers need only deliver the service. After that, managing outages and downtime with replication, recovery, and backup is down to you.

It's a matter of time

Latency is another pain for hybrid. Your IT infrastructure is a tapestry of networking, storage, compute workhorses, and software-as-a-service (SaaS) providers. Each SaaS outfit has its own data centres, regional configurations, and network architectures.

There’s no single cause of latency, and in the hybrid world the causes can be harder to pin down. These can range from the location of the cloud provider’s data centre and the raw compute power used to process an application, to the amount of network bandwidth and the presence of congestion.

Roy Illsley, Ovum principal analyst, tells us: “The issues with any hybrid cloud solution are centered on how it integrates with the public cloud(s) that make it hybrid. So we have the question of where data resides and can that data be moved to new locations if the application has latency sensitivity?”

How do you circumvent latency? That depends. You may choose to host your data with the same provider as your cloud compute – if regulation and cost allow. You could also reduce the number of network hops and/or their latency, streamline your network traffic to tackle congestion, or beef up the bandwidth and/or the compute capacity available for processing.
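Whichever lever you pull, you first need numbers. A rough probe – standard-library Python only, and the hostnames below are made-up placeholders – is to time a TCP handshake to each candidate endpoint and compare:

```python
# Rough latency probe: time a TCP handshake to each candidate endpoint.
# The hostnames below are illustrative placeholders.
import socket
import time

ENDPOINTS = [
    ("storefront.example-cloud-eu.com", 443),
    ("storefront.example-cloud-us.com", 443),
]

def tcp_latency_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Return the time taken to open a TCP connection, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

for host, port in ENDPOINTS:
    try:
        print(f"{host}: {tcp_latency_ms(host, port):.1f} ms")
    except OSError as err:
        print(f"{host}: unreachable ({err})")
```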

Whatever you do, you’ll need to keep an eye on latency. Ultimately, you may decide to move data and workloads to a different platform, perhaps as part of a long-term migration project or perhaps automatically in response to changing conditions. The benefits of shifting locations to reduce latency or avoid network congestion will have to be weighed against the costs of moving, especially the costs associated with transferring large volumes of data across the world.
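The back-of-envelope sums are worth doing before you commit. In the sketch below, the per-GB egress rate is an assumed, illustrative figure – check your provider’s current price list:

```python
# Back-of-envelope cost of moving data between platforms.
# The per-GB egress rate is an assumed, illustrative figure,
# not any provider's current list price.
EGRESS_RATE_PER_GB = 0.09  # assumed US dollars per GB transferred out

def transfer_cost(terabytes: float, rate_per_gb: float = EGRESS_RATE_PER_GB) -> float:
    """Estimated cost of moving the given number of terabytes off-platform."""
    return terabytes * 1000 * rate_per_gb

print(f"Moving 50 TB ~= ${transfer_cost(50):,.0f}")    # roughly $4,500 at the assumed rate
print(f"Moving 1 PB ~= ${transfer_cost(1000):,.0f}")   # roughly $90,000 at the assumed rate
```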

And this raises an important sub-issue – a pain point many come to realise only once they are committed to hybrid. Many cloud-based storage services operate a complex pricing plan based not just on storage capacity, but also on network throughput and the number and type of requests.

To put this in perspective, analyst firm Storage Switzerland reckons that storing 1PB of data on Amazon’s S3, for example, can cost an estimated $1,515,000 over three years, not much less than the $1,666,000 to store it on an on-premises networked-storage system.
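You can run the same arithmetic against your own workload. In this sketch, every rate is an assumed, illustrative figure rather than anyone’s current list price – which is rather the point, since the total swings wildly with storage rate, egress volume, and request patterns:

```python
# Back-of-envelope three-year cost of keeping 1 PB in a cloud object store.
# All rates below are assumed, illustrative figures, not current list prices.
GB_PER_PB = 1_000_000
MONTHS = 36

storage_rate_gb_month = 0.023   # assumed $/GB-month for standard object storage
egress_rate_gb = 0.09           # assumed $/GB transferred out
monthly_egress_gb = 50_000      # assumed 50 TB read back out each month
request_cost_month = 500        # assumed monthly spend on GET/PUT requests

total = MONTHS * (
    GB_PER_PB * storage_rate_gb_month
    + monthly_egress_gb * egress_rate_gb
    + request_cost_month
)
print(f"Three-year estimate: ${total:,.0f}")
```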

This highlights the need to track usage of cloud resources and to act on it: exercise restraint, expand as part of a strategy, and shut down virtual machines once they are done rather than leaving them spinning. In the hybrid world, you’ll need to know exactly how much you're spending across not one but many platforms, otherwise costs will run away from you and lead to some very embarrassing conversations with the finance or business chiefs in your organisation.
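Automating the housekeeping helps. As a minimal sketch – assuming AWS and a hypothetical "lifecycle=ephemeral" tag applied to short-lived machines – a scheduled job could sweep up anything still running:

```python
# Minimal housekeeping sketch: stop any running instances carrying an
# assumed, hypothetical "lifecycle=ephemeral" tag so they stop billing.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def stop_ephemeral_instances() -> list:
    """Find running instances tagged as ephemeral and stop them."""
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:lifecycle", "Values": ["ephemeral"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    ids = [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids

print("Stopped:", stop_ephemeral_instances())
```

Run something like that on a schedule across each platform you use, and the end-of-month conversation with finance gets a lot less awkward.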
