Many enterprises want the agility and cost-effectiveness of cloud infrastructure, but they balk at the complexity and cost of implementing a cloud in-house. In addition, enterprises need the ability to scale cloud deployments to any level, along with the flexibility to manage compute and storage resources of many types. In this article, we’ll look at these requirements for successfully deploying and managing a private or hybrid cloud.

Making cloud simple

Enterprises want to adopt on-premises clouds, but they don’t want to spend weeks or months developing them, and they don’t want to have to hire a team of OpenStack or VMware experts. Essentially, enterprises want the same point-and-click provisioning they get with AWS or other public clouds, but with the governance, scalable performance and control only an on-premises cloud can provide.

There are three areas to consider:

General ease of use – The cloud platform should incorporate an intuitive user interface that enables self-service by end users, thereby reducing the support load on the IT department. However, self-service needs to be governed: the system should incorporate workflow management and role-based access control (RBAC) so the IT team can set parameters to make sure self-service takes place in a fully controlled, auditable way. For example, the cloud platform could require permission before a user can create or delete a virtual server, or add resources to a virtual server.

Deployment – The cloud software should include a bootstrap system with its own hypervisor, so it can be installed on a server and be up and running within 30 minutes. Clouds that come pre-installed on hyper-converged platforms are even easier to deploy. IT managers should be able to discover hypervisors or compute nodes over the network and have the cloud come up automatically, with no labor spent installing operating systems or hypervisors.

Configuration and management – The simplest clouds incorporate self-discovery, auto-configuration, and unified management. User interfaces vary widely among cloud platforms, but the simplest ones allow a single view of all infrastructure components – even across geographies – along with point-and-click provisioning. Since operating a cloud essentially means controlling access to a pool of resources, a cloud’s management system should use RBAC to let administrators control access to servers, VMs, storage assets, network bandwidth, and even CPU cores. The more granular the management system, the better a company can optimize its resources and control costs.
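As a minimal sketch, granular RBAC of this kind reduces to checking a (role, action, resource type) grant table. The roles, actions, and resource types below are hypothetical, not drawn from any particular platform:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Grant:
    role: str
    action: str
    resource_type: str   # e.g. "vm", "storage", "network", "cpu_core"


# Hypothetical grant table: rights are scoped per action and per resource type.
GRANTS = {
    Grant("developer", "create", "vm"),
    Grant("developer", "resize", "vm"),
    Grant("storage_admin", "allocate", "storage"),
    Grant("network_admin", "reserve_bandwidth", "network"),
}


def authorized(role: str, action: str, resource_type: str) -> bool:
    """Return True if the role holds a grant for this action on this resource type."""
    return Grant(role, action, resource_type) in GRANTS


print(authorized("developer", "create", "vm"))         # True
print(authorized("developer", "delete", "vm"))         # False: not granted
print(authorized("developer", "allocate", "storage"))  # False: wrong resource scope
```

Because each grant names a specific resource type, the same mechanism that governs virtual machines can govern storage, bandwidth, or CPU cores without special cases.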

For example, a comprehensive management system should let cloud managers require approval for a range of user actions. Whenever a user makes a request that affects resource utilization, cost, or the availability of a virtual server or application – such as creating or destroying a virtual machine, or adding CPU, RAM, or storage to one – a manager can hold that request until it is approved.
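The approval flow described above can be sketched as a queue that holds gated actions until a manager decides, while ungated actions pass straight through. The action names and fields here are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Request:
    user: str
    action: str           # e.g. "create_vm", "add_ram"
    status: str = "pending"


class ApprovalQueue:
    """Hold requests for gated actions until a manager decides."""

    def __init__(self, gated_actions):
        self.gated_actions = set(gated_actions)
        self.pending = []

    def submit(self, req: Request) -> Request:
        if req.action in self.gated_actions:
            self.pending.append(req)   # held for manager review
        else:
            req.status = "approved"    # ungated actions pass straight through
        return req

    def decide(self, req: Request, approve: bool) -> None:
        req.status = "approved" if approve else "denied"
        self.pending.remove(req)


queue = ApprovalQueue(gated_actions={"create_vm", "delete_vm", "add_ram"})
r = queue.submit(Request(user="alice", action="add_ram"))
print(r.status)             # pending: held until a manager decides
queue.decide(r, approve=True)
print(r.status)             # approved
```

Every transition through the queue leaves a record, which is what makes self-service auditable rather than a free-for-all.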

Scaling without limits

Enterprises should be able to scale clouds using whatever resources they have on hand. Rather than being constrained to buying a set number of a vendor’s nodes, IT managers should be able to scale using a single server or a single disk, be it SATA, SAS, SSD, or NVMe. This flexibility lets companies optimize resources by using what’s on hand, thereby minimizing costs.

In addition, many companies have geographically distributed operations and want to be able to scale clouds using resources in different data centers, or to use multiple clouds. For example, an administrator in a New York data center may need 50 additional servers to scale, while an administrator in Chicago has those servers sitting idle. The cloud platform should allow companies to scale across nodes in different geographical locations, or to similar clouds in different geographical locations, all managed through a single user interface.
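A single management view across sites comes down to querying one inventory for idle capacity, wherever it lives. The site names and server counts below are hypothetical, echoing the New York/Chicago example above:

```python
# Hypothetical per-site inventory: total vs. in-use servers.
SITES = {
    "new-york": {"total": 200, "in_use": 195},
    "chicago":  {"total": 120, "in_use": 60},
}


def sites_with_capacity(servers_needed: int) -> list[str]:
    """Return the sites whose idle servers can satisfy the request."""
    return [name for name, s in SITES.items()
            if s["total"] - s["in_use"] >= servers_needed]


print(sites_with_capacity(50))  # ['chicago']: New York has only 5 idle servers
```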

Furthermore, many cloud platforms place strict limits on the number of nodes that can be incorporated into a cloud. Some platforms are limited to 32 or 64 nodes, for example. The ideal cloud platform should permit scaling to hundreds of nodes.

Finally, the cloud platform should incorporate software-defined storage, which should enable asymmetric scaling across different types of disks for maximum scaling flexibility.
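Asymmetric scaling means the storage pool grows one disk at a time, in any mix of types and sizes. A toy sketch, with hypothetical disk types and capacities:

```python
# A toy software-defined storage pool that grows one disk at a time,
# regardless of disk type or size.
pool: list[dict] = []


def add_disk(kind: str, capacity_gb: int) -> None:
    """Add a single disk of any type to the shared pool."""
    pool.append({"kind": kind, "capacity_gb": capacity_gb})


add_disk("sata", 4000)   # asymmetric: mismatched types and sizes are fine
add_disk("ssd", 960)
add_disk("nvme", 1920)

total_gb = sum(d["capacity_gb"] for d in pool)
print(total_gb)  # 6880
```

The point is that capacity is aggregated, not matched: there is no requirement to buy identical nodes or identical disks to grow the pool.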

Running on any hardware

The platform should be able to run on any server, any storage, and any network interface. Most companies want to use commodity hardware, and the cloud should permit that. Some companies want to implement clouds on hyper-converged platforms, and that should be possible as well.

Controlling total cost of ownership

Hardware and a cloud software license are just a starting point for a cloud’s total cost of ownership (TCO), but they’re an important first place to look for savings. Some cloud software can be licensed at a fraction of the cost of other products, and even hyper-converged platforms vary widely in price.

The flexibility to use existing commodity hardware helps companies optimize their resources and reduce costs by avoiding new CapEx. If they use software-defined storage, for example, they don’t have to buy a new SAN or new hardware to implement a cloud.
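To make the CapEx argument concrete, here is an illustrative comparison. Every figure is a hypothetical placeholder, not a quote from any vendor:

```python
# Illustrative only: all prices are hypothetical placeholders.
new_san_cost = 100_000           # buying dedicated storage hardware
sds_license_cost = 15_000        # software-defined storage license
existing_hardware_cost = 0       # commodity disks already on hand

capex_with_san = new_san_cost
capex_with_sds = sds_license_cost + existing_hardware_cost

savings = capex_with_san - capex_with_sds
print(savings)  # 85000
```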

A final area of TCO to consider is professional services and custom development. Here again, pricing varies widely between cloud platform providers.

There are many paths to on-premises cloud, featuring many products with many different price points and feature sets. By focusing on the considerations discussed here, enterprises can find the cloud that best fits their needs.

Jim Freeman is senior cloud architect at OnApp, where he leads the sales engineering team and works with customers and partners on cloud solutions design, implementation, support, and training.