According to research carried out by analyst outfit 451 Group, running an OpenStack deployment is currently more expensive than buying in cloud services from VMware, Red Hat or Microsoft. This is primarily, it says, because of the higher cost of paying OpenStack engineers.

The 451 Group's analysis examined the cost of cloud computing, including the often complex pricing models of both private and public clouds. According to the report: “…equivalent solutions from commercial software vendors have TCO advantages over OpenStack due to the high cost and low supply of skilled OpenStack engineers.”

The report also notes that OpenStack distributions can provide a TCO advantage over the DIY approach for a typical small-scale enterprise deployment, but only where the use of a distribution results in a 45 percent manpower saving.

“Finding an OpenStack engineer is a tough and expensive task that is impacting today’s cloud-buying decisions,” said Dr. Owen Rogers, senior analyst, 451 Research. “Commercial offerings, OpenStack distributions and managed services all have their strengths and weaknesses, but the important factors are features, enterprise readiness and the availability of specialists who understand how to keep a deployment operational. Buyers need to balance all of these aspects with a long-term strategic view – as well as TCO – to determine the best course of action for their needs.

“Decisions also need to include the risks associated with lock-in should prices rise, support dwindle or features be decommissioned,” Rogers added. “As OpenStack matures and the pool of available engineering staff increases, buyers can expect the TCO of deploying OpenStack to improve.”


What’s new in OpenStack

None of this is new, and the problem may well lie with perceptions of what OpenStack is and can do rather than with its ‘weaknesses’.

Writing on his blog this February, OpenStack pioneer Randy Bias explained: “One of the biggest failures of OpenStack to date is expectation setting. New potential OpenStack users and customers come into OpenStack and expect to find:

  • A uniform, monolithic cloud operating system (like Linux)
  • A set of well-integrated and interoperable components
  • Interoperability with their own vendors of choice in hardware, software, and public cloud

Unfortunately, none of this exists, and probably none of it should ever have been expected, since OpenStack won’t ever become a unified cloud operating system. The problem can be summed up by a request I still see regularly from customers: ‘I want vanilla OpenStack.’ The fact is that vanilla OpenStack does not exist, never has existed, and never will exist.”

Bias continues: “I am trying to put a pragmatic face on what is a very challenging problem: how do you get to the next generation of data center? We all believe OpenStack is the cornerstone of such an effort. Unfortunately, OpenStack itself is not a single monolithic turnkey system. It’s a set of interrelated but not always dependent projects, a set that is increasing rapidly, and your own business probably needs only a subset of all the projects, at least initially.

“That means being realistic about what can be accomplished and what is a pipe dream. Follow these guidelines and you’ll get there. But whatever you do, don’t ask for ‘vanilla OpenStack’.”

The OpenStack road, like any route to a new data center architecture, will not be an easy one.