IT infrastructure can be difficult to manage, particularly at scale, and the trend over time has been towards greater complexity. Meanwhile, IT budgets have either stagnated or been cut in real terms, leading administrators to seek out solutions that will help them cope with the increased demands of modern applications and users.

Hyperconverged infrastructure is one such approach. It has already been adopted by countless organizations as a convenient way to deploy applications such as virtual desktop infrastructure (VDI) or to operate Hadoop clusters for running analytics workloads. Hyperconverged infrastructure is an easily deployed point solution for such requirements, but it could have broader use cases that will see it become more widely adopted.

What’s in the box?

SimpliVity, one of the leading hyperconverged vendors, recently signed a deal to transform a Fortune 50 financial services firm’s data center, moving its entire range of workloads onto SimpliVity’s platform. This reflects a shift that has been apparent for the past year or so, according to Roy Illsley, principal analyst for infrastructure solutions at Ovum: “Hyperconverged infrastructure is now finding a wider market than just the VDI and SMB use cases where it seemed to have been lumped. We’re seeing a big shift to enterprises using hyperconverged for running a private cloud internally, because it’s simpler to manage and you can just get your virtualization admin guy to look after it.”

$5bn - the expected size of the hyperconverged hardware market in 2019

Gartner

For those unfamiliar with hyperconverged infrastructure, it is based around the concept of an appliance-like node – typically a 1U or 2U “pizza box” rack-mount chassis – that integrates compute, storage and networking with centralized management.

Also implicit in hyperconverged infrastructure is a focus on virtual machines, although some vendors, such as Nutanix, are now adapting their platforms to support workloads running as microservices in containers.

These appliance-like nodes are designed to serve as infrastructure building blocks, enabling the user to scale as required simply by adding more nodes. In hardware terms, the nodes themselves are little more than x86 servers fitted with direct-attached storage.

The difference between a hyperconverged system and any old x86 server stuffed with internal storage is the software. This typically includes a ‘single pane of glass’ management console through which to operate the cluster, plus a software-defined storage layer that creates a shared pool of storage across multiple nodes in a cluster.
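
To make the concept concrete, the sketch below (a minimal Python illustration using hypothetical names and numbers, not any vendor’s actual implementation) models how a software-defined storage layer might pool each node’s direct-attached disks into a single logical store, replicating writes across nodes so that losing one node does not mean losing data:

```python
# Conceptual sketch of hyperconverged storage pooling. All names and
# figures are hypothetical; this is not any vendor's real implementation.
from dataclasses import dataclass, field


@dataclass
class Node:
    """One appliance-like node: an x86 server with direct-attached storage."""
    name: str
    storage_tb: float
    used_tb: float = 0.0

    @property
    def free_tb(self) -> float:
        return self.storage_tb - self.used_tb


@dataclass
class Cluster:
    """The software-defined storage layer pools every node's local disks."""
    nodes: list[Node] = field(default_factory=list)
    replicas: int = 2  # each write is mirrored to this many distinct nodes

    def add_node(self, node: Node) -> None:
        # Scaling out is simply adding another node to the pool.
        self.nodes.append(node)

    def usable_capacity_tb(self) -> float:
        # Usable capacity is the raw pool divided by the replication factor.
        return sum(n.storage_tb for n in self.nodes) / self.replicas

    def write(self, size_tb: float) -> None:
        # Place replicas on the distinct nodes with the most free space,
        # so the loss of a single node does not lose the data.
        targets = sorted(self.nodes, key=lambda n: n.free_tb,
                         reverse=True)[: self.replicas]
        if any(t.free_tb < size_tb for t in targets):
            raise RuntimeError("pool exhausted")
        for t in targets:
            t.used_tb += size_tb


cluster = Cluster()
for i in range(3):
    cluster.add_node(Node(name=f"node-{i}", storage_tb=10.0))
cluster.write(1.5)
print(f"Usable pool: {cluster.usable_capacity_tb():.1f} TB")  # 15.0 TB
```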

This is what makes hyperconverged infrastructure such an ideal platform for VDI deployments. Virtual desktops can place heavy demands on storage, and so it makes sense for the compute and storage to be coupled as closely as possible, rather than relying on a more traditional storage area network (SAN), for example.

By contrast, plain converged infrastructure, though it is also aimed at simplifying IT delivery, offers pre-integrated systems composed of traditional server, storage and networking products.

Use responsibly


One criticism of hyperconverged infrastructure is that this software layer is often proprietary and unique to a specific vendor, and therefore carries the potential risk of lock-in for the customer. Whether this risk is any greater than it might be when selecting Microsoft Windows or an Oracle database, or any other enterprise software, is open to debate, and organizations may decide that the benefits of easier deployment and management make such a risk worthwhile.

With proprietary management tools, a hyperconverged deployment could easily become yet another silo within the data center for the IT department to maintain, unless the organization intends to use it to replace other infrastructure.

Hyperconverged systems can also challenge the structure of an organization’s existing IT departments. If you have teams dedicated to servers, storage, networks and security, which one gets to manage the box that combines all four?

Another criticism is that the software-defined storage layer may not be as mature as traditional enterprise storage systems, and may lack key capabilities required by enterprise customers, such as quality of service or data replication support.

Some hyperconverged vendors have now been through several years of development and improvement, and have begun to fill in the gaps. SimpliVity, for example, claims to include a full range of data center features in its OmniStack software, including backup, deduplication and wide area network (WAN) data optimization.
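
As a rough illustration of what one of those features does, the toy sketch below shows the basic idea behind block-level deduplication in general (not how OmniStack itself works): identical blocks are stored once and referenced by a content hash thereafter.

```python
# Toy illustration of block-level deduplication: identical blocks are
# stored once and referenced by their content hash. This is a generic
# sketch, not OmniStack's actual mechanism.
import hashlib

store: dict[str, bytes] = {}   # content hash -> unique physical block
refs: list[str] = []           # the logical "file": an ordered list of hashes

def write_block(block: bytes) -> None:
    digest = hashlib.sha256(block).hexdigest()
    store.setdefault(digest, block)  # keep only the first copy of a block
    refs.append(digest)

# Writing the same 4 KB block ten times consumes one block of storage.
for _ in range(10):
    write_block(b"\x00" * 4096)

print(f"Logical blocks: {len(refs)}, physical blocks: {len(store)}")
# Logical blocks: 10, physical blocks: 1
```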

It is still relatively early days for the hyperconverged infrastructure market, however, and changes are still happening. One trend, for example, is for the software platform to be separated from the hardware, purportedly in the interests of offering greater customer choice.

Art of the deal

The two largest hyperconverged firms – Nutanix and SimpliVity – have struck deals with server vendors such as Dell and Lenovo, enabling them to offer a hyperconverged solution based around their partners’ server hardware. The upshot is that customers interested in deploying hyperconverged infrastructure can procure it from a supplier they may already have a relationship with, while Nutanix and SimpliVity are free to focus on their software platforms.

Likewise, VMware seems to have quietly sidelined its own EVO:RAIL hyperconverged product and is instead touting a software stack comprising the latest version of its Virtual SAN software-defined storage platform, along with vSphere and vCenter. This can be seen in the VxRail system launched earlier in 2016 by VCE, another division of the EMC federation, along with VMware.

However, Illsley believes the biggest challenge for hyperconverged vendors is in the way they scale, by adding new nodes to the infrastructure. While this is part of the simplicity of the hyperconverged approach, adding an entirely new node can lead to some resources being underutilized if a workload simply requires more memory, for example.
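
A back-of-the-envelope calculation shows the problem (the node specification and the figures below are entirely hypothetical):

```python
# Hypothetical fixed node specification: every node added brings all
# three resources, whether they are needed or not.
NODE = {"cores": 24, "ram_gb": 512, "storage_tb": 10.0}

# Suppose the only shortfall is memory: workloads need 600 GB more RAM.
ram_needed_gb = 600

# Nodes can only be added whole, so round up to whole nodes.
nodes_added = -(-ram_needed_gb // NODE["ram_gb"])  # ceiling division -> 2

extra_ram_gb = nodes_added * NODE["ram_gb"]     # 1024 GB arrives
ram_utilization = ram_needed_gb / extra_ram_gb  # ~59% of the new RAM is used

print(f"Nodes added: {nodes_added}")
print(f"New RAM actually used: {ram_utilization:.0%}")
# Meanwhile, the 48 cores and 20 TB of storage that came along sit idle.
```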

In response, some vendors now allow buyers to customize the processor, memory and storage configuration inside nodes, but this complicates the procurement process and moves away from the simple drop-in-and-go appliance-like marketing message.

Nevertheless, Illsley expects to see adoption of hyperconverged infrastructure grow for at least the next few years, to the level where it may even account for 15-20 percent of data center infrastructure spending. “It will continue to grow for the next few years,” he says, “by which time it will have become just another part of the data center, and by then something else will probably have replaced it as the next big thing.”

This article appeared in the September issue of DatacenterDynamics magazine.