How hyperconverged infrastructure enables true hybrid computing

Simplicity is the key

Sponsored Hybrid cloud has existed as a concept for almost as long as cloud services themselves, with the result that everyone now believes they understand it. But expectations of what hybrid cloud is and what it should deliver have changed as cloud services have evolved, so it is worth reconsidering the best way to approach a hybrid cloud strategy.

A good working definition of hybrid cloud is a combination of on-premises private cloud resources plus resources from a public cloud service provider. There should be data portability across the two environments - otherwise, your data center and public cloud are operating in two silos without any apps or data exchange across them. Extend that to more than one public cloud provider and you can call it multicloud, but it still fits the hybrid definition. Not surprisingly, the growth in adoption of cloud services means that most organizations have access to at least some public cloud resources in addition to their on-premises infrastructure.

But enterprise expectations have shifted so that hybrid cloud today means more than just having a bunch of servers and other infrastructure on-premises while also making use of a few services from, say, Amazon’s public cloud. There is a risk of creating more silos and adding complexity, so careful planning is needed.

Broadly speaking, the expectation now is that there should be some level of integration between the two domains, such that the customer can move workloads between them as they choose. The ultimate goal of hybrid cloud is that an organization should be able to treat public cloud resources as if they were merely an extension of its own infrastructure, overseen by the same management tools.

The question is how to get to this hybrid cloud nirvana. It is relatively simple to provision infrastructure on a public cloud, but it can be a lot more complicated to link it back to your premises. And perhaps you later want to move that infrastructure, and the workloads running on it, to a different cloud - for cost reasons, say, or because of new data regulations. How easy would that be?

Unsurprisingly, most organizations do not want to get locked into one cloud platform, irrespective of its value, and so they need to retain architectural control of their infrastructure.

First, build your private cloud

In a traditional setup, applications typically each run in their own infrastructure silo with their own servers and storage. Taking a step back for a moment, enterprises have long been advised by analysts and consultants that their cloud strategy should begin with modernizing their own data center infrastructure to make it operate along the same lines as a public cloud.

This involves changing the way that infrastructure operates by moving to a private cloud comprising a pool of virtualized infrastructure. Software-defined compute and storage resources can be drawn on demand as required by new applications and services.

Many enterprise applications were developed using the three-tier architecture in which presentation, application processing, and data management functions are physically separated. Typically, the application logic runs on one or more application servers, while the data layer comprises a relational database running on a separate set of servers, backed by a dedicated storage area network (SAN). These separate physical servers can be consolidated into a private cloud as virtual machines, so they are no longer in separate infrastructure silos.

It is understandable if companies want to keep a mission-critical application that their business depends upon running on its own infrastructure, especially if they intend to replace that application with a new system eventually.

However, for new-build infrastructure, organizations would be well advised to choose a platform that offers cloud-like flexibility and automation, with virtualization of resources built in from the start. It is no coincidence that hyperconverged infrastructure (HCI) has seen strong growth in sales compared with standard server platforms over the past several years. HCI was initially developed as an easier way to implement infrastructure for running virtual machines, rather than building it out of discrete servers and SAN storage components, which can be complex and costly to configure together.

HCI is based on the concept of creating a pool of resources from a highly flexible set of infrastructure building blocks using advanced distributed systems technologies. Each node integrates compute, storage, networking and virtualization into a single enclosure, and the nodes are clustered together to create a virtualized pool of compute and storage resources. The pool can be expanded simply by adding additional nodes.
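As a rough illustration, the pooling idea can be sketched in a few lines of Python. The Node and Cluster types below are illustrative stand-ins for this article, not any vendor's actual object model.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        """One HCI building block: compute and storage in a single enclosure."""
        cpu_cores: int
        ram_gb: int
        storage_tb: float

    @dataclass
    class Cluster:
        """A cluster pools the resources of all of its nodes."""
        nodes: list[Node] = field(default_factory=list)

        def add_node(self, node: Node) -> None:
            # Scaling out is simply a matter of joining another node.
            self.nodes.append(node)

        @property
        def capacity(self) -> dict:
            # The virtualized pool is the sum of every node's resources.
            return {
                "cpu_cores": sum(n.cpu_cores for n in self.nodes),
                "ram_gb": sum(n.ram_gb for n in self.nodes),
                "storage_tb": sum(n.storage_tb for n in self.nodes),
            }

    cluster = Cluster()
    for _ in range(3):
        cluster.add_node(Node(cpu_cores=32, ram_gb=512, storage_tb=30.0))
    print(cluster.capacity)  # {'cpu_cores': 96, 'ram_gb': 1536, 'storage_tb': 90.0}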

The key part of HCI is in the software layer, which virtualizes everything and creates a distributed pool of storage using the combined CPUs and storage from all the nodes. In essence, the platform is the software, as these days the hardware need be nothing more than industry-standard servers filled with storage, typically high-speed solid state drives (SSDs).

The HCI software layer automates many common infrastructure tasks, so that users can quickly stand up new applications and services without having to file a helpdesk ticket to get IT staff to provision new servers and storage.
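To give a flavor of what that self-service model looks like, here is a hedged sketch of provisioning a VM through a REST call using Python's requests library. The endpoint, payload fields, and response shape are hypothetical, not any real product's API.

    import requests

    # Hypothetical control-plane endpoint and token, for illustration only.
    API = "https://hci-cluster.example.com/api/v1"
    TOKEN = "replace-with-a-real-token"

    # Declare what the application needs; the platform finds the resources.
    spec = {
        "name": "orders-app-01",
        "vcpus": 4,
        "memory_gb": 16,
        "disks": [{"size_gb": 200, "tier": "ssd"}],
        "network": "prod-vlan-20",
    }

    resp = requests.post(
        f"{API}/vms",
        json=spec,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    print("VM created:", resp.json().get("uuid"))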

Because HCI combines infrastructure resources into a unified platform, one of its key benefits is that it brings everything under a single management console, in contrast to traditional infrastructure, which may comprise servers, storage, and networking from a variety of vendors, each with its own separate management tools.

Making hybrid simpler

Returning to the question of a hybrid cloud strategy, one of the major insights the IT industry has arrived at is that everything becomes much simpler if you can deliver a consistent experience across the on-premises and public cloud environments. This means providing consistent operations, so that IT teams can deploy applications seamlessly across public and private clouds and, when the need arises, move workloads between environments without any application refactoring.

The fly in the ointment is that each cloud provider does things differently. Each of the major clouds has its own networking constructs and application programming interfaces (APIs) for provisioning virtual machines, accessing storage, and configuring networks, not to mention differences in the attributes of virtual machine instances across cloud platforms.

What is needed is some sort of abstraction layer that can hide the complexity of the underlying cloud platform from workloads - a kind of cloud portability layer, if you like. HCI is one way of providing such a portability layer across multiple environments, including on-prem and public clouds, since it was originally developed as a means of hiding the complexity of the physical infrastructure needed to stand up an on-premises private cloud. A private cloud built with HCI makes it much simpler and more operationally efficient to extend your on-premises infrastructure management layer to public clouds, creating a new kind of HCI - hybrid cloud infrastructure. Call it HCI 2.0, if you will.
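To make the portability idea concrete, here is a minimal sketch of such an abstraction layer in Python. The CloudBackend interface and both backends are hypothetical stand-ins, not a real vendor implementation.

    from abc import ABC, abstractmethod

    class CloudBackend(ABC):
        """Hides one environment's provisioning API behind a common interface."""

        @abstractmethod
        def provision_vm(self, name: str, vcpus: int, memory_gb: int) -> str:
            """Create a VM and return an opaque identifier."""

    class OnPremBackend(CloudBackend):
        def provision_vm(self, name, vcpus, memory_gb):
            # Would call the on-prem HCI control plane here.
            return f"onprem://{name}"

    class PublicCloudBackend(CloudBackend):
        def __init__(self, provider: str):
            self.provider = provider

        def provision_vm(self, name, vcpus, memory_gb):
            # Would translate to this provider's instance types and API calls.
            return f"{self.provider}://{name}"

    def deploy(backend: CloudBackend, name: str) -> str:
        # The workload is described once; the backend absorbs the differences.
        return backend.provision_vm(name, vcpus=4, memory_gb=16)

    print(deploy(OnPremBackend(), "billing-db"))
    print(deploy(PublicCloudBackend("aws"), "billing-db"))

The point of the pattern is that the workload definition never changes; only the backend does.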

Unsurprisingly, traditional on-prem HCI vendors have seen this opportunity and begun to offer versions of their platforms that can operate both on-prem and on public clouds, typically using the bare metal machine instances that many cloud providers support. To extend the on-prem HCI software to public clouds, the HCI vendor uses the public cloud's bare metal nodes as the underlying hardware instead of servers in the customer's own data center.

This sort of hybrid cloud deployment allows organizations to use the same management tools to monitor and control workloads running both on-premises and on public cloud clusters. The management console should see them simply as clusters of HCI nodes, regardless of where they are running, and enable workloads to be moved between clusters. This is the beauty of HCI 2.0 - extending the simplicity and efficiency of traditional on-prem HCI to multiple public clouds, creating a truly seamless hybrid cloud infrastructure layer across data centers and public clouds.
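In sketch form, that uniform view might look something like the following Python fragment; the cluster names and the migrate call are purely illustrative.

    # To the management plane, every cluster is just a set of HCI nodes,
    # whether it lives in the data center or on cloud bare metal instances.
    clusters = [
        {"name": "dc-east", "location": "on-prem", "nodes": 8},
        {"name": "aws-use1", "location": "aws-bare-metal", "nodes": 4},
        {"name": "az-weu", "location": "azure-bare-metal", "nodes": 4},
    ]

    def migrate(vm: str, src: str, dst: str) -> None:
        # The same operation regardless of where either cluster runs.
        print(f"migrating {vm}: {src} -> {dst}")

    migrate("erp-app", "dc-east", "aws-use1")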

Contrast this with a traditional three-tier infrastructure architecture, where setting up a hybrid cloud would mean having to build hybrid connectivity with separate compute, storage, and networking elements at each end, and it’s easy to see how the complexity can quickly get out of hand.

However, there are further advantages to using HCI as an abstraction layer. The most advanced HCI platforms have additional built-in capabilities, such as data replication to support disaster recovery, which means that organizations can easily configure failover policies between clusters in the cloud and on-premises, usually without needing any application refactoring or code changes. This makes it much simpler to migrate apps across on-prem data center and cloud boundaries.
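As an illustration, a declarative disaster-recovery policy between an on-prem and a cloud cluster might look roughly like this; the field names are hypothetical, though real platforms expose similar settings.

    # Sketch of a DR policy pairing an on-prem cluster with a cloud cluster.
    dr_policy = {
        "protected_vms": ["erp-app", "erp-db"],
        "source_cluster": "dc-east",
        "target_cluster": "aws-use1",
        "replication_interval_minutes": 15,  # RPO: at most 15 minutes of data loss
        "failover": {
            "mode": "manual",      # or "automatic" if the source cluster is lost
            "preserve_ip": False,  # re-map networks on the target side
        },
    }

Because the policy refers only to clusters and VMs, not to provider-specific constructs, the same definition works in either direction.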

One consideration potential customers should watch for is whether the HCI platform under consideration lets them use their existing public cloud accounts and networking setup, rather than requiring a separate cloud account; otherwise, they may not be able to put their existing public cloud credits towards the cost of the hardware nodes. Being able to reuse an existing networking setup also greatly reduces operational complexity: if you do not need to manage additional networking overlays to migrate your apps, your hybrid cloud journey will move much faster.

Enterprises are, of course, increasingly looking to adopt cloud-native technologies as part of their digital transformation strategy, which typically means using containers or serverless computing for new applications and possibly refactoring existing applications to be containerized, where appropriate.

However, cloud-native techniques make a virtue of being stateless and ephemeral, which means they are not necessarily well suited to the kind of enterprise applications that organizations depend on for day-to-day operations, such as databases. Organizations could use a cloud-hosted database-as-a-service (DBaaS), but this again risks lock-in to a single cloud provider.

In addition, there is considerable risk, time, and cost involved in refactoring enterprise applications, and many organizations are likely to resist making such changes. Where agility and cost are concerns, moving enterprise applications onto HCI instead brings benefits such as the ability to unify security policy enforcement across on-prem and public clouds, making it much faster to catch and fix security loopholes. This lift and shift to HCI also gives organizations the flexibility to migrate to the cloud, if they wish, at a time of their choosing.
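For instance, a single security policy pushed to every cluster might be sketched like this, with all names hypothetical:

    # One policy object, enforced identically on-prem and in each public cloud,
    # instead of a separate rule set per provider.
    policy = {
        "name": "block-db-from-internet",
        "applies_to": {"tier": "database"},
        "ingress": [{"from": {"tier": "application"}, "ports": [5432]}],
        "default": "deny",
    }

    for cluster in ["dc-east", "aws-use1", "az-weu"]:
        print(f"applying {policy['name']} to {cluster}")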

HCI is morphing quickly into "HCI 2.0". Built on the foundations of hyperconverged infrastructure, HCI 2.0 extends beyond on-prem data centers to include public cloud infrastructure. And as HCI vendors make their platforms available across more cloud platforms, enterprises will find that HCI is a sound choice for data center modernization. HCI 2.0 also provides a portability layer that offers them the best route to a hybrid and multicloud strategy, letting them lift and shift workloads to the cloud without ceding control to their cloud provider.

Sponsored by Nutanix
