Cloud computing is no longer viewed as an exotic new technology; it has become an accepted part of an organization’s IT toolkit for meeting business needs. At the same time, cloud remains less mature than other areas of IT, and can prove more complex than unwary adopters expect.
One of the promises of cloud computing has long been that users should be able to move workloads from their own data center to that of a cloud service provider and back again, if required, or even between public clouds. This might be because an application or service calls for more resources than the organization has available, or simply because it is more cost-effective to run it off-premises at that time.
Same old same old?
Despite this, cloud adoption has broadly followed a pattern of private clouds being used to run traditional enterprise workloads, while new-build applications and services are developed and deployed on a public cloud platform such as AWS. Even where traditional workloads have been farmed out to a service provider, this has typically been a case of colocation or a hosted private cloud arrangement.
According to Ovum principal analyst Roy Illsley, this is because many organizations are still at an early stage of cloud adoption, and are just looking to get their foot on the first rung of the ladder.
“We are not really seeing companies have [workload mobility] as their first concern. What they are really looking to do is ‘lift and shift’ to the cloud, so they might have this three-tier app or database based on legacy hardware, and they want to know how to get that to the cloud as a first step. Only once they’ve done that, they start looking at how to transpose that workload, change that workload, and think about moving it somewhere else,” he said.
There are good reasons why workload mobility has never really taken off. VMware’s vMotion feature has supported live migration of virtual machines from one host server to another for over a decade, but it is one thing to do this inside a data center and quite another to do it between data centers.
Things get more complicated if you want to move a workload from your own cloud to another one that you do not control. It is not practical to move a virtual machine to a cloud based on a different platform, because of differences in the hypervisor, the APIs and the management tools used. Even if it is based on the same platform, you may not have the same level of management oversight and control as you do over workloads running on your own infrastructure.
Then there is the fact that enterprise workloads seldom operate in a vacuum; they depend on other resources to function, such as shared storage, a SQL database server or a directory service. Unless those services are also replicated to the cloud, traffic between the workload and these resources will have to be routed back and forth across a WAN connection instead of across the data center itself.
Perhaps this is why VMware changed direction, ditching its vCloud Air service for Cloud Foundation, a platform it describes as a self-contained software-defined data center (SDDC) that can be provisioned onto commonly used public clouds such as AWS and IBM SoftLayer.
However, the prospect of workload mobility could move nearer to reality thanks to containers. Most people are familiar with containers through the efforts of Docker, but the ecosystem is made up of various technologies and implementations.
What all container platforms share is that they enable a piece of software and its dependencies (such as code libraries) to be packaged together into an isolated space – the container. Multiple containers can run on the same host, much like virtual machines, but because containers share the host’s operating system kernel rather than each carrying a full guest OS, they are far lighter on resources, and a given server can run many more containers than virtual machines.
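As a rough illustration of that packaging model, the sketch below uses the Docker SDK for Python to start a container from a public image and run a single command. The image tag and command are illustrative assumptions, not anything described in this article.

```python
# Minimal sketch using the Docker SDK for Python (the "docker" package).
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a throwaway container: the image bundles the application and all of
# its libraries, so the same image runs unchanged on any compatible host.
container = client.containers.run(
    "alpine:3.18",                        # assumed image tag
    ["echo", "hello from a container"],   # assumed command
    detach=True,
)

container.wait()                          # block until the process exits
print(container.logs().decode())          # prints "hello from a container"
container.remove()                        # clean up the stopped container
```

The point is that everything the process needs ships inside the image, so the same call works on any host with a compatible container runtime.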
“The containers approach gives developers the opportunity of writing a new cloud-native app that, provided you’ve got support for containers – and if you’re on Linux, you will have – you can then operate that container on a platform that runs in various other cloud environments,” said Illsley.
Containers grow up
Containers are a less mature technology than virtual machines, so the various platforms are still developing and refining associated components such as orchestration, monitoring, persistent storage support and lifecycle management tools. These components are essential for operating container-based application frameworks at any kind of scale, as internet giants such as Google and Facebook do.
Docker has established its platform as the leading format for packaging and running containers, but several tools are available for orchestration, such as the Kubernetes project, which originated at Google, the Mesos project from the Apache Software Foundation, and Docker’s own Swarm tool.
Kubernetes is integrated into several enterprise platforms, such as VMware’s Photon, Red Hat’s OpenShift application platform and even Microsoft’s Azure Container Service. Meanwhile, Mesos is used by many large web companies, including Twitter, Airbnb and eBay, because it can scale to manage tens of thousands of nodes running containers.
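To make the orchestration idea concrete, here is a minimal sketch using the official Kubernetes Python client to declare a small deployment of a containerized web server. The deployment name, image and replica count are illustrative assumptions; a real rollout would also define health checks, resource limits and networking.

```python
# Minimal sketch using the official Kubernetes Python client ("kubernetes" package).
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig credentials

labels = {"app": "demo-web"}  # assumed labels for this example

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the orchestrator keeps three containers running
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",  # assumed container image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# Submit the desired state; the orchestrator schedules the containers across
# nodes and replaces any that fail.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Rather than starting containers by hand, the operator declares the desired state and the orchestrator does the scheduling, scaling and recovery across the cluster.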
The elephant in the room is that containers are tied to a particular operating system kernel. This is typically Linux, as container platforms such as Docker build on features in the Linux kernel, but Microsoft has recently begun adding container support to Windows Server and its Azure cloud service.
While a container image created on a Linux system should run on any other Linux system, you cannot take a container image created for Windows and run it on Linux, or vice versa. Having said that, Microsoft announced at DockerCon in April 2017 that it would support Linux container images in Hyper-V Containers, where the container runs inside a small virtual machine.
Containers are unlikely to completely replace virtual machines, but for many workloads, especially scale-out distributed workloads, they are becoming the preferred method of deployment. This is not just because they are easier and quicker to provision, but also because they are easier to migrate from one system to another, finally beginning to fulfill more of the promise of the cloud.
This article appeared in the August/September issue of DCD Magazine.