

The truth about hyperconvergence


Hyperconverged infrastructure started as a reaction against the traditional IT stack. Traditional IT had been shaped by necessity: components were bolted on after the fact, like Band-Aids, to cover up existing problems, and together they formed what could be called an “accidental architecture.” Partly because of all these add-ons, traditional IT had become expensive to acquire and maintain, complicated to manage, and too rigid to give an organization the elasticity and agility it needed to scale with changing business needs.

Hyperconverged infrastructure platforms, such as SimpliVity’s, are built on solid foundations that include data acceleration, global unified management, and built-in data protection. These capabilities let hyperconverged vendors offer a complete solution that doesn’t need outside components to make a whole stack; everything is built in natively. Where the traditional IT stack used separate components as medicines for the symptoms of a larger problem, hyperconvergence aims to cure the problem itself.

What hyperconvergence isn’t

Traditional IT

[Image: The traditional IT stack. Source: Thinkstock / Geoff Kuchera]

In the decade or so since virtualization became mainstream, a lot has happened in the data center infrastructure market. Virtualization broke new ground in the move toward movable workloads, but it stopped short of decoupling storage from dedicated hardware, which is where hyperconvergence really made its mark.

Traditional shared, hard-drive-based storage is expensive and inefficient. Cloud-style applications frequently don’t need that kind of persistent storage array, or specialized servers beyond the readily available commodity x86 machines. Yet virtualized workloads remain tied to their storage, and they must be manually provisioned with enough resources for peak demand, which is wasteful.

Hyperconvergence can be thought of as the final wave in the evolution of convergence. The first wave of convergence combined servers and storage using a single vendor delivery model. This approach focused solely on the top half of the traditional IT stack. The second wave of convergence expanded upon the first wave’s approach as it packaged servers and storage into scalable x86 building blocks. The third wave is where hyperconvergence came in.

What hyperconvergence is

Hyperconvergence is the result of years of technical development; it’s much more than a packaging exercise. The first waves of convergence did some work around using servers in a smarter way, but hyperconvergence brought real sophistication to the market. It abstracts the hardware away from the VMs and applications running on top of it, taking agility and mobility a big step further.

So hyperconverged infrastructure isn’t just a commodity box. It uses x86 servers as a foundation, but adds a broad range of functionality so enterprises can actually build growing businesses on it: built-in data protection, unified management, and data acceleration techniques that kick in right at the start of the data lifecycle.
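Data acceleration at the start of the data lifecycle commonly means inline deduplication: data is fingerprinted before it lands on disk, and duplicate blocks are stored only once. A minimal sketch of the idea follows; the block size, hash choice, and `DedupStore` class are illustrative assumptions, not a description of any vendor’s actual implementation.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size; real systems vary


class DedupStore:
    """Toy content-addressed store: each unique block is kept only once."""

    def __init__(self):
        self.blocks = {}   # fingerprint -> block data (unique blocks only)
        self.writes = []   # per-write list of block fingerprints

    def write(self, data: bytes) -> int:
        """Store data inline, returning how many new unique blocks were added."""
        new_blocks = 0
        fingerprints = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:   # only previously unseen blocks cost capacity
                self.blocks[fp] = block
                new_blocks += 1
            fingerprints.append(fp)
        self.writes.append(fingerprints)
        return new_blocks


store = DedupStore()
store.write(b"A" * BLOCK_SIZE * 3)  # three identical blocks: one stored copy
store.write(b"A" * BLOCK_SIZE)      # already stored: no new capacity used
```

Doing this inline, before data is written, is what distinguishes this approach from post-process deduplication, which first writes everything and cleans up duplicates later.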

The result is an IT infrastructure that combines the best of on-premises IT (performance, protection, and resiliency) with the best aspects of the cloud (efficiency, elasticity, and cost-effectiveness). In fact, a recent study by Evaluator Group found that hyperconvergence vendor SimpliVity was more cost-effective than public cloud vendor Amazon Web Services, offering 22% to 49% TCO savings over three years.
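A percentage-savings claim like this is straightforward arithmetic over two three-year totals. The sketch below shows the calculation with hypothetical dollar figures; the amounts are illustrative assumptions, not numbers from the Evaluator Group study.

```python
def tco_savings(option_tco: float, baseline_tco: float) -> float:
    """Percentage saved by an option relative to a baseline TCO."""
    return (baseline_tco - option_tco) / baseline_tco * 100


# Hypothetical three-year totals: public cloud instance fees vs.
# hyperconverged hardware + licenses + operations (illustrative only)
cloud_3yr = 1_000_000.0
hci_3yr = 680_000.0

savings_pct = tco_savings(hci_3yr, cloud_3yr)
print(f"{savings_pct:.0f}% TCO savings over three years")
```

Under these made-up figures the hyperconverged option comes out 32% cheaper, which falls inside the 22% to 49% range the study reports.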

In all, hyperconverged infrastructure consolidates all IT components below the hypervisor into a single solution. Those components were pieces of hardware or software that patched up symptoms of the old, inefficient data center. They worked when they had to, but as a whole the data center was, and for many enterprises still is, an amalgamation of silos and products, each with its own purpose. It’s a patchwork of functionality and technology that’s been limping along for years, but is starting to show its wear.

So hyperconverged infrastructure isn’t a packaging exercise, and it isn’t another add-on piece of a data center infrastructure. It leaves behind old ideas about refresh cycles and the relationships between different pieces of IT architecture. Instead, it brings a new way to create a just-in-time, responsive infrastructure that doesn’t bog the business down.

Jesse St Laurent is the vice president of Product Strategy at SimpliVity.

