The IT landscape is changing fast, forcing vendors and distributors to shift their focus from supply to service. As businesses move from a run-rate to an annuity model, the marketplace is littered with casualties, from failed resellers and MSPs to the M&A activity that is consolidating the market still further.
Even longstanding behemoths like Avnet were not immune to being subsumed into more aggressive, agile tech distributors. And the prize? Ever-decreasing hardware and software opportunities, as companies reach the end of their current IT equipment cycle, switch to smaller, more agile technologies and embrace the emergence of hyperconverged computing.
An appliance way of thinking
The attraction of hyperconverged platforms is that they offer the user a modular solution, much like an appliance, which can replace the entire data center stack below the hypervisor and, according to Forrester Research, reduce the number of devices required to service the business by as much as 7:1.
It also appeals to organizations that want to go down the “one throat to choke” route of having a single vendor responsible for all the elements in the IT stack.
This approach can represent a massive saving for the enterprise, but equally a massive problem for the companies that sell hardware and the data centers that house it, as it reduces the enterprise's potential footprint.
Building blocks
The wave of hyperconverged infrastructure has the core principles of virtualization at its foundation, using virtual machines and the well-developed toolsets around the virtualization layer to support key resources such as networking and storage. It also simplifies the operation of the stack, allowing everything from load balancing and workload provisioning through to data replication, deduplication and backup to be controlled with individual VMs at the core of the process.
Once the reliance on particular hardware stacks or software configurations has been removed, this VM-centric view of the estate enables workloads to be moved back and forth between different clouds and data centers easily and transparently to the business.
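To make the VM-centric idea concrete, here is a minimal sketch in Python. The class and function names (Workload, Site, migrate) are purely illustrative and not tied to any vendor's product; the point is that a workload defined only at the VM level can be re-homed from an on-premise site to a cloud without touching its definition.

```python
# Hypothetical sketch of a VM-centric estate: workloads are described
# independently of the hardware they happen to run on, so moving one
# between sites is a change of placement, not a rebuild.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    vcpus: int
    memory_gb: int


class Site:
    def __init__(self, name: str):
        self.name = name
        self.workloads: list[Workload] = []

    def add(self, workload: Workload) -> None:
        self.workloads.append(workload)


def migrate(workload: Workload, source: Site, target: Site) -> None:
    # Because the workload is defined only by its VM-level spec, moving it
    # is simply a re-homing exercise; the business sees no change.
    source.workloads.remove(workload)
    target.workloads.append(workload)


if __name__ == "__main__":
    dc = Site("on-prem-dc1")
    cloud = Site("public-cloud-eu")
    erp = Workload("erp-app", vcpus=8, memory_gb=64)
    dc.add(erp)

    migrate(erp, source=dc, target=cloud)     # hardware-agnostic move
    print([w.name for w in cloud.workloads])  # -> ['erp-app']
```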
Why do it?
The technical argument for hyperconvergence is one of lowering the management overhead and acquisition cost while improving the utilization of the platform compared with legacy, island-based infrastructure. By combining server and storage, it eliminates the SAN (storage area network) by turning storage into a virtualized layer that talks directly to the hypervisor of choice. Technical people like it because it reduces the risk of wrecking individual VMs' performance through I/O contention while still embracing the fundamental concept of virtualization: the pooling of resources.
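As a rough illustration of that pooling, the sketch below (again hypothetical, not any vendor's API) shows each node's local disks contributing capacity to one shared pool from which VM volumes are provisioned, rather than being carved from a dedicated SAN array.

```python
# Hypothetical sketch of resource pooling in a hyperconverged cluster:
# every node contributes its local capacity to a single logical pool,
# and VM volumes are allocated from that pool instead of an external SAN.
class Node:
    def __init__(self, name: str, local_storage_tb: float):
        self.name = name
        self.local_storage_tb = local_storage_tb


class StoragePool:
    """Aggregates every node's local capacity into one shared pool."""

    def __init__(self, nodes):
        self.capacity_tb = sum(n.local_storage_tb for n in nodes)
        self.allocated_tb = 0.0

    def provision_volume(self, vm_name: str, size_tb: float) -> str:
        if self.allocated_tb + size_tb > self.capacity_tb:
            raise RuntimeError("pool exhausted: add a node to scale out")
        self.allocated_tb += size_tb
        return f"volume for {vm_name}: {size_tb} TB from the shared pool"


nodes = [Node("node-1", 10), Node("node-2", 10), Node("node-3", 10)]
pool = StoragePool(nodes)            # 30 TB pooled, no external SAN
print(pool.provision_volume("sql-vm", 4))
print(pool.provision_volume("web-vm", 2))
```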
This concept of pooling transcends where the physical resources are, as separate hyperconverged systems can be managed as if they were deployed in the same cabinet. For the technical team, the ability to control and manage an enterprise's resources spread across the world from a single, integrated pane of glass is the IT manager's utopian dream.
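A simple way to picture that single pane of glass: clusters in different locations register with one manager and are queried as a single estate. The sketch below is illustrative only and assumes no particular management product.

```python
# Hypothetical "single pane of glass": clusters in different regions are
# registered with one manager and inventoried as if they sat in the same
# cabinet. Names are illustrative; no specific product is implied.
class Cluster:
    def __init__(self, location: str, vms: list[str]):
        self.location = location
        self.vms = vms


class GlobalManager:
    def __init__(self):
        self.clusters: list[Cluster] = []

    def register(self, cluster: Cluster) -> None:
        self.clusters.append(cluster)

    def inventory(self) -> dict[str, list[str]]:
        # One call returns the whole estate, wherever it physically lives.
        return {c.location: c.vms for c in self.clusters}


manager = GlobalManager()
manager.register(Cluster("London", ["erp-app", "web-01"]))
manager.register(Cluster("Singapore", ["analytics-01"]))
print(manager.inventory())
```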
The CxO and business argument for converged IT is also very compelling. The consumption of hybrid clouds is very much on the minds of the operational and financial CxOs who still run on-premise IT and constantly evaluate where the tipping point lies between on-premise and private/public cloud models. Their challenge is to shift workloads quickly and transparently from one resource to another to satisfy a range of business issues: from the need to reduce costs in a particular business sector to remain competitive in the market, to delivering a business continuity plan to auditors or compliance officers that proves critical business applications can be moved at will and with minimal organizational impact.
What could go wrong?
It's all sounding great, isn't it? But take note: converged infrastructure has some limitations.
Firstly, it is not the silver bullet the discussion above might suggest. There is a fair amount of preparatory work to be done before it is possible to flip the hyperconverged switch. The virtualization of the IT stack needs to be designed to adopt the idea of VM-centric operation. This means looking ruthlessly at where current legacy equipment is deployed and ensuring that the vendors vying for the next tech refresh will fit into your hyperconverged model. Failure to do this will result in the spawning of a group of technology islands, some capable of being hyperconverged and some not. These will need to be connected together in the traditional way, which will reduce the benefits of the solution.
A final thought on hyperconvergence: it is most certainly the future for many types of workload, but it won't take over the world. It will continue to fuel a new breed of vendor that layers its own IP on top of commodity-priced, well-established hardware components and sells the result as a hyperconverged appliance. The choice for the consumer will widen, and the race to the bottom for hardware vendors and distributors will accelerate as their value-add is eroded; it will only be slowed if they can lock customers into a specific application stack as part of the solution.
Steve Groom is CEO of Vissensa, a UK managed service provider