Organizations are increasingly adopting cloud solutions at the expense of traditional IT. Industry research firm IDC estimates that cloud infrastructure – both public and private – will account for more than 40 percent of IT infrastructure spending by 2019. Other estimates are even more aggressive in predicting when cloud will overtake traditional IT.
What’s driving this trend? The technologies and processes that have developed in IT environments over the last several decades don’t support the agility needed to succeed in a business environment where innovation is king. Traditional IT models suffer from long deployment cycles, slow provisioning times, rigid capacity planning, large upfront costs, and inefficient resource silos.
Public cloud problems
The cloud model overcomes these challenges and offers businesses a more efficient, agile, and easily scalable alternative. It also promises better support and greater cost savings, characterized by automated self-service provisioning, elastic on-demand capacity, metered usage and chargebacks, and a single shared resource pool supporting all workloads.
For IT consumers (application owners and line of business executives), a cloud model offers greater freedom to experiment through low-risk trial and error, rapid time to value, and the flexibility to pay only for what they consume.
For IT providers (internal IT infrastructure teams), cloud offers reduced costs and higher utilization through resource pooling, along with greater responsiveness and agility to business needs. In addition, it allows IT to move from a traditional cost-center model to a service provider model, with a chargeback or “showback” cost model aligning IT investments with end-user demand.
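As a rough illustration of how a metered chargeback model works, the sketch below aggregates usage records into a per-department bill. All rates, metric names, and department names are hypothetical, chosen for illustration rather than taken from any particular product:

```python
# Hypothetical unit rates per metered resource (illustrative values only).
RATES = {"vcpu_hours": 0.04, "gb_ram_hours": 0.005, "gb_storage_days": 0.002}

# Metered usage records: (department, metric, quantity) -- sample data.
usage = [
    ("finance", "vcpu_hours", 1200),
    ("finance", "gb_ram_hours", 4800),
    ("marketing", "vcpu_hours", 300),
    ("marketing", "gb_storage_days", 5000),
]

def chargeback(records, rates):
    """Sum metered usage into a per-department charge."""
    bills = {}
    for dept, metric, qty in records:
        bills[dept] = bills.get(dept, 0.0) + qty * rates[metric]
    return bills

print(chargeback(usage, RATES))
```

The same calculation with the charges zeroed out on the invoice is effectively a “showback”: usage is reported back to each department to make demand visible without actually billing for it.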
Great. Let’s all go to the cloud!
Not so fast. Moving your data center to a cloud model isn’t as easy as it sounds.
Getting started in a public cloud can be easy, but it comes with tradeoffs. For one thing, giving end users free rein to self-provision can lead to runaway costs and sticker shock when the bill arrives. Moving to public cloud requires a close eye on consumption because of the inherent unpredictability of costs. Other downsides can include unpredictable performance and aggregated service level agreements (SLAs) that average performance and uptime stats over long time periods – your results may vary. That’s fine for test environments but typically doesn’t fly for mission-critical applications.
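One simple way to keep that close eye on consumption is to project month-end spend from month-to-date usage and compare it against a budget. A minimal sketch, using a plain linear run-rate extrapolation and hypothetical dollar figures:

```python
def projected_month_end(spend_to_date, day_of_month, days_in_month):
    """Linearly extrapolate month-to-date spend to a month-end estimate."""
    return spend_to_date / day_of_month * days_in_month

def over_budget(spend_to_date, day_of_month, days_in_month, budget):
    """True when the run-rate projection exceeds the monthly budget."""
    return projected_month_end(spend_to_date, day_of_month, days_in_month) > budget

# Example: $4,200 spent by day 10 of a 30-day month, against a $10,000 budget.
print(projected_month_end(4200, 10, 30))  # 12600.0 projected
print(over_budget(4200, 10, 30, 10000))   # True -> sticker shock ahead
```

A linear projection is crude – real usage is often bursty – but even this level of monitoring flags runaway self-provisioning weeks before the bill arrives.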
And of course, there is always the question of privacy and governance. Depending on regulations and other compliance requirements, moving to public cloud may not be an option at all.
Is private cloud the answer?
Building a private cloud sounds like the easy answer to all IT problems, but things aren’t always as they seem.
In fact, Gartner has observed significant disenchantment among private cloud users of late. According to a Gartner report, Internal Private Cloud Is Not for Most Mainstream Enterprises, 95 percent of private cloud implementations go wrong. Implementations are failing to deliver on the promised benefits, says the author, Thomas Bittman, and success requires a huge investment in new technology and new ways of thinking, along with big organizational changes. Few organizations find they can justify such a large investment.
It’s time to rethink our approach to private cloud
To succeed in a private cloud deployment, you need to pick the right foundation. And that means choosing the right infrastructure. Traditional IT, as we know it today, was optimized for performance and quality of service, not for agility, ease of scale, and simplicity. Anyone attempting to deliver a private cloud infrastructure on the traditional stack will quickly find themselves up to their ears in specialists and will watch their implementation plans move further and further out of reach. Traditional IT simply wasn’t designed to deliver the cloud capabilities of large public cloud providers.
To deliver the cloud capabilities we think of today, the large cloud innovators threw out the old model and started from scratch. This gave rise to the so-called “web-scale” architecture. Using design principles like software-only delivery, commodity hardware, scale-out design, and globally distributed resource pools, these solutions were optimized for scale, simplicity, and low cost. However, they came with their own set of compromises.
These web-scale infrastructures lacked the consistent and predictable performance, protection, and resiliency organizations had come to expect with traditional IT. Cloud is great at scaling workloads, but not as adept at guaranteeing performance levels, uptime, and quality of service. This is why many enterprises opt to keep their most precious applications in-house instead of in the cloud.
Best of both?
But what if you could have both? What if you could get the simplicity, scale, and cost-savings of “web-scale,” with the predictable performance and enterprise protection of the traditional best-of-breed IT infrastructure stack?
This is where integrated solutions come into play. These data center technologies share the ultimate goal of simplifying and streamlining IT operations. Hyperconverged infrastructure, for example, simplifies the data center by consolidating all IT functions and services below the hypervisor into commodity x86 building blocks, providing a single shared resource pool across the entire IT stack and eliminating the need for point products and inefficient, siloed IT architectures.
To see how hyperconvergence and public cloud stack up, TechTarget surveyed IT professionals in Western Europe to assess which technology was the preferred option, releasing the results in a report titled The Battle for Virtualized Workloads: Hyperconverged Infrastructure Versus Public Cloud Computing. The survey produced results many would find surprising: although the top reasons for deploying, or considering deploying, hyperconverged infrastructure and public cloud computing are almost identical, the majority of respondents felt hyperconverged infrastructure was the better choice when looking to modernize their IT environment. Contributing to this view, respondents felt hyperconverged infrastructure was less disruptive to IT operations and offered simpler management.
While the public cloud does offer the ability to scale quickly, effectively, and affordably, hyperconverged infrastructure appears able to offer similar capabilities. In fact, a recent independent study from Evaluator Group found that hyperconverged infrastructure vendor SimpliVity offered a three-year total cost of ownership savings of 22 to 49 percent over public cloud provider Amazon Web Services, while still delivering enterprise performance, resiliency, agility, elasticity, and efficiency.
The ability to offer the best parts of both traditional IT and the cloud in a single solution is what is attractive to many IT professionals about integrated solutions. If you could get the benefits of the past and the present combined into a single solution, and all at a price point that sets the standard for the future, why wouldn’t you?
Rich Kucharski is vice president of solutions architecture at SimpliVity