When Amazon announced it was adding bare metal instances to the portfolio of server options available to users of its Amazon Web Services (AWS) cloud platform, the move sparked much speculation. Would it have an impact on colocation and hosting providers, for example, as physical servers have traditionally been the domain of those companies?

Amazon unveiled its new Bare Metal service as a public preview at its re:Invent conference last November, enabling users to access an entire physical server rather than the virtual machine instances it mainly offers. Virtual machine instances typically offer just a subset of the capabilities of the physical machines they are running on.

Bare metal…


So why should users bother with virtual machines at all? Because they are more flexible. Many cloud service providers operate a “pay-as-you-go” model, whereby users can provision a virtual machine when required, then halt or retire it when no longer needed, and pay only for the resources used when the machine is running.
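In programming terms, that lifecycle is only a handful of API calls. Here is a minimal sketch using Python and the boto3 AWS SDK; the region, AMI ID and instance type are hypothetical placeholders, not values taken from this article.

```python
# A minimal sketch of the pay-as-you-go lifecycle using the boto3 AWS SDK.
# The region, AMI ID and instance type are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision a virtual machine instance when it is needed...
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# ...halt it when idle, so compute charges stop accruing...
ec2.stop_instances(InstanceIds=[instance_id])

# ...and retire it entirely when it is no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])
```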

Why are Amazon’s bare metal services different from the physical servers which users can own and operate in colocation providers’ facilities? Because cloud service providers take care of operating and managing the physical servers and all supporting infrastructure. Under the colocation model, the user organization usually has to purchase and manage its own equipment and rent space in the provider’s data center to host it.

Amazon also claims that its Bare Metal instances offer the same elasticity and scalability as its other cloud instances, meaning that customers can provision them in minutes and scale them up or down just as they can with existing instance types.
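In practice, this means the call that launches a virtual machine can also launch a bare metal server: only the instance type changes. A sketch, again with boto3, assuming the i3.metal instance type that featured in the Bare Metal preview; the AMI ID remains a placeholder.

```python
# The same API call used for virtual machines, with a bare metal instance
# type; i3.metal is assumed here, and the AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="i3.metal",           # an entire physical server
    MinCount=1,
    MaxCount=1,
)
```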

AWS is not the first cloud provider to add a bare metal capability to its portfolio – Rackspace, Oracle and IBM have offered bare metal servers for several years. However, Amazon’s Bare Metal servers are notable for including dedicated hardware (a platform called Nitro) that offloads network and storage handling, leaving as much of the bare metal performance as possible available for the user’s application.

Customers therefore have a choice between their own physical servers, cloud bare metal servers and virtual machines, along with another option that is currently fashionable: containers, which allow workloads to be deployed with a smaller subset of the functions of a physical server.

…virtual machines…

Virtualization on servers is a way of dividing up system resources so that multiple users or workloads can run independently on the same system without interfering with each other, and its roots go back to the first multi-user systems of the mainframe era. For example, IBM’s VM (virtual machine) mainframe operating system was launched in 1972.

Virtual machines came to x86 servers courtesy of VMware, which unveiled its ESX Server and GSX Server products in 2001. On x86 servers, the main driver was workload consolidation. Before virtualization, corporate servers would often run a single workload each, operating at utilization rates as low as ten percent and sitting idle much of the time. Converting those workloads into virtual machines meant that several could run independently side by side on a single physical server, reducing the overall number of servers an organization needed.

GSX Server allowed users to operate virtual machines on top of an existing operating system such as Windows, and is thus an example of a Type-2 hypervisor, whereas ESX Server ran on the bare metal, making it a Type-1 hypervisor. ESX Server gave way to ESXi, the compact, dedicated hypervisor that underpins VMware’s vSphere platform today.

Type-2 hypervisors are less efficient, because they run on top of an existing operating system. For this reason, they are now largely restricted to client virtualization such as VMware Workstation, which lets a developer run one or more virtual machines on Windows or Linux desktops and laptops.

Other hypervisors commonly used for server virtualization include Xen, developed as an open source project in 2003. This was adopted by AWS to drive the virtual machine instances on its EC2 cloud service and is also the foundation for Citrix XenServer and XenDesktop.

A few years later, in 2007, Linux gained its own hypervisor in the shape of the Kernel-based Virtual Machine (KVM) project, so called because KVM is implemented as a kernel module that turns the Linux kernel into a bare metal hypervisor when loaded. Because it is effectively part of the kernel, KVM has grown to become the default option for many platforms that use virtualization, such as the OpenStack cloud framework, Apache CloudStack and most of the major Linux distributions.

The other major hypervisor in use today is Microsoft’s Hyper-V, which has been built into every version of Windows Server since Windows Server 2008. Since the vast majority of organizations run Windows servers, it is widely adopted: many that came late to virtualization will have been tempted simply to build on what was already implemented in Windows.

All of these hypervisors differ slightly in the way that they operate, but the end result is much the same; they divide up the resources of the host server in order to create multiple virtual machines, each of which behaves as though it were a bare metal server in its own right. Users can even migrate virtual machine images from one hypervisor to a different one by using the right tools.
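To make that last point concrete, the bulk of such a migration is usually converting the virtual disk. The sketch below shells out to the qemu-img utility (assumed to be installed) to convert a VMware VMDK disk into the qcow2 format used by KVM; the file names are hypothetical.

```python
# One step of moving a VM between hypervisors: converting its virtual disk
# from VMware's VMDK format to the qcow2 format used by KVM guests.
# Assumes the qemu-img utility is installed; file names are hypothetical.
import subprocess

subprocess.run(
    ["qemu-img", "convert",
     "-f", "vmdk",            # source disk format
     "-O", "qcow2",           # target disk format
     "appserver.vmdk",        # disk exported from the VMware host
     "appserver.qcow2"],      # disk ready to attach to a KVM guest
    check=True,               # raise an error if the conversion fails
)
```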

Each virtual machine must be provisioned with its own operating system and virtual disk before it can do any useful work, and may also need to be separately provisioned with the application that it is to run. In an enterprise environment, this will typically be handled by a management tool such as Microsoft’s System Center Virtual Machine Manager (SCVMM), while in a cloud platform, this process is highly automated and driven by end users with a self-service provisioning tool or in response to some event.
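On a KVM host managed through libvirt, for example, that provisioning step amounts to registering a machine definition and booting it. Here is a minimal sketch using the libvirt Python bindings; the domain XML is a bare skeleton, and the name, sizes and disk path are hypothetical.

```python
# A minimal provisioning sketch using the libvirt Python bindings on a KVM
# host. The domain XML is a skeleton; the name, sizes and disk path are
# hypothetical placeholders.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the machine definition
dom.create()                            # boot the virtual machine
conn.close()
```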

…or containers

Containers are an alternative way of dividing up system resources, but they operate at the level of the operating system. Instead of being an entire virtual machine, a container is essentially just an isolated environment within a server that holds application code and any supporting code libraries the application depends on. As containers do not need to include an entire operating system, they can be created and moved between servers quickly, and it is possible to run many more containers than virtual machines on any given server.

Containers have been around for many years, but the current explosion in container uptake is due to Docker, which in 2013 launched its namesake platform that enables developers to quickly deploy code inside containers. The Docker philosophy is to break down applications into smaller modules that can be deployed and updated separately, which melds well with current notions of agile development and a microservices architecture.
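The developer-facing mechanics are straightforward. A short sketch using Docker’s Python SDK (the docker package) that runs a one-off command in a container built from a small public image; it assumes a local Docker daemon is running.

```python
# A sketch using Docker's Python SDK: run one command in a container from
# a small public image, then remove the container. Assumes a local Docker
# daemon is available.
import docker

client = docker.from_env()
output = client.containers.run(
    "alpine:latest",                    # an image a few MB in size, not a full OS
    ["echo", "hello from a container"],
    remove=True,                        # clean up the container on exit
)
print(output.decode())                  # -> hello from a container
```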

In the cloud, containers, virtual machines and bare metal servers are all just ways for service providers to sell units of compute power to customers. Each has its individual merits and use cases, and these are not mutually exclusive; a virtual machine could be used to host an array of containers that make up an application, for example.

Bare metal servers provide extra performance for demanding or specialist workloads, and allow users to deploy software that may be licensed only for non-virtualized environments, or to use operating systems that are not supported in the cloud provider’s virtual machine catalog.

However, they typically involve more work, as the user has to take on tasks such as deploying software and keeping it up to date, all of which the cloud provider handles when a virtual machine is used.

The significance of AWS adding bare metal instances is that customers will be able to use these alongside the firm’s virtual machine and container services, all from one console. It simply adds a new choice into the mix for customers to use as they see fit, and that is what the cloud is really all about.

This article appeared in the February/March issue of DCD Magazine.