From the perspective of the cloud service provider or hybrid cloud IT administrator, containers can be a wonderful thing. The ability to deliver a fully packaged, customized application to a customer on demand, scale it to meet the user's current needs, and deploy it in a way that is cost effective for both the provider and the user would seem to address the basic reason for cloud-based applications.

With containers being significantly less resource-demanding than virtual machines, providers can deliver a higher level of service and offer more flexible and cost-efficient services than are possible in a strictly VM environment. Even transitioning existing hardware from running VMs to containers can allow a significant expansion of capabilities, given the more efficient use of resources compared to a strictly hypervisor-based environment. And if there is no need for the same physical hardware to run multiple operating systems, the need for a full-blown VM is mitigated and application containers become the more reasonable solution.


Lightweight solution

Containers are also a very lightweight solution with exceptional portability. This makes it more practical to migrate from, for example, an internally hosted hybrid cloud to a public cloud service provider, or vice versa. Building test projects, deploying applications globally, and updating and upgrading all benefit from the ease of creating, using, and deploying containers. This, of course, raises the question of how all these containers are managed, how deployment is orchestrated, and how operation is handled. In the IT infrastructure hierarchy, Containers-as-a-Service (CaaS) slots in neatly between the traditional concepts of Platform-as-a-Service and Infrastructure-as-a-Service, supplementing or perhaps even replacing Software-as-a-Service, which has had to deal with issues of long-term contracts and financial accounting because it does not fit cleanly into the pay-as-you-go model that cloud services trumpet.


With the Containers-as-a-Service model, the key to successful operation is being able to manage the containers in an integrated way that behaves the same whether they sit in your local data center or are deployed to a cloud.

The major cloud service providers, Amazon, Google, and Microsoft, all offer their own take on CaaS, while Docker, which brought the concept into the mainstream, recently released its Docker Datacenter offering, which supports both on-premises and off-premises deployments.

Amazon supports Docker

Amazon offers the Amazon EC2 Container Service (ECS), marketed as a highly scalable, high-performance container management system that supports Docker containers. It is designed to run container applications on a managed cluster of Amazon EC2 instances, and the service is provided at no additional charge over the standard cost of the EC2 resources. The system allows you to build and package applications in local server containers and migrate them to the EC2 instances, which run a Docker daemon, with the knowledge that the container application will behave and function the same on the managed cluster as it did locally.

Amazon believes the scalability is such that a user could move from a single container to hundreds of instances running thousands of containers without increasing management complexity, because ECS abstracts away the infrastructure issues that would otherwise need to be dealt with, allowing the ECS user to focus on creating and operating their containerized applications.
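As a rough illustration of how little ECS-specific work that abstraction leaves, the sketch below uses the boto3 Python SDK to register a task definition and run it on a cluster. The cluster name, image, and sizing values are hypothetical placeholders, not anything taken from Amazon's documentation.

```python
import boto3

# Sketch only: register a task definition for a locally built image and
# run it on an existing ECS cluster. The cluster name, image, and sizing
# values are hypothetical placeholders.
ecs = boto3.client("ecs", region_name="us-east-1")

task_def = ecs.register_task_definition(
    family="web-app",
    containerDefinitions=[{
        "name": "web",
        "image": "example/web-app:1.0",   # image built and tested locally
        "memory": 256,
        "portMappings": [{"containerPort": 80}],
    }],
)

# Launch one copy of the task; raising `count` is how the same definition
# scales out across the managed cluster.
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    count=1,
)
```

The definition describes the container; ECS decides which EC2 instances in the cluster actually run it.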


Google: Kubernetes manages Docker

For Google, cluster management and orchestration of Docker containers are provided via Google Container Engine. The engine is designed to support Kubernetes, the open source tool for containerized applications that lets users automate the deployment and operation of their containers and scale applications on demand.

Because Kubernetes is open source, it can run on hybrid or public clouds as well as on-premises. Users are not limited to building a containerized app and moving it to Google's infrastructure; they can also build their own infrastructure at whatever scale they feel is appropriate and apply the same management and orchestration skills learned locally when they use Google Container Engine. Google aggressively updates the full Kubernetes suite to match the current release versions of the software. Like Amazon, Google provides a cluster management API, in this case the Kubernetes API itself.
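To make the "same skills locally and in the cloud" point concrete, here is a minimal sketch using the Kubernetes Python client. The deployment name, labels, and image are hypothetical, and the only thing that changes between a local cluster and Container Engine is which kubeconfig context the client loads.

```python
from kubernetes import client, config

# Sketch only: the same Deployment can be submitted to a local cluster or
# to a hosted one; only the active kubectl context differs.
config.load_kube_config()

container = client.V1Container(
    name="web",
    image="example/web-app:1.0",
    ports=[client.V1ContainerPort(container_port=80)],
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # scale on demand by changing this value
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Scaling on demand is then a matter of changing the replica count, whichever cluster the current context points at.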

Google pricing uses a flat fee per hour per cluster for the Container Engine; nodes in the cluster are Google Compute Engine instances and are priced accordingly, with charges ongoing until the nodes are explicitly deleted. Clusters of five nodes or fewer incur no cluster fee, allowing users to prototype their deployments without additional expense.


Microsoft’s Azure cloud containers

Microsoft has offered Windows containers for over a year, but only recently made its cloud container service generally available. The Azure Container Service (ACS) gives the user the option of using Apache Mesos or Docker Swarm to scale and orchestrate applications and containers. Because ACS supports multiple container orchestration and scaling technologies, including Marathon and DC/OS, the service exposes the API endpoints of the chosen technology. So if your container experience is with Docker Swarm, for example, you can move to ACS and keep using the same API you are already familiar with. ACS is designed to allow the use of popular open source tools to monitor, control, visualize, and run applications and containers, and that same support for open source components makes ACS applications, not just containers, fully portable.
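For example, a team already driving a local Swarm with the Docker SDK for Python could, in principle, point the same code at the Swarm endpoint an ACS deployment exposes. The sketch below assumes that setup; the management endpoint URL and image name are hypothetical placeholders.

```python
import docker

# Sketch only: the endpoint URL is a hypothetical placeholder standing in
# for the Swarm management endpoint exposed by an ACS deployment.
swarm = docker.DockerClient(base_url="tcp://acs-mgmt.example.com:2375")

# Start a container exactly as you would against a local Swarm; the
# scheduler decides which agent node actually runs it.
swarm.containers.run("example/web-app:1.0", detach=True, ports={"80/tcp": 80})
```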

ACS uses Azure pricing for required VMs and nodes but also has specific pricing based on container service instance sizing.


Docker Datacenter goes in-house

Docker Datacenter is designed to deliver CaaS within a data center or virtual private cloud. Unlike the Amazon, Google, and Microsoft offerings, which are designed to take advantage of their respective pay-as-you-go cloud infrastructures, Docker Datacenter is deployed within your own infrastructure, giving the user many of the deployment, management, and orchestration advantages offered by those public cloud providers.

The CaaS infrastructure is built from other Docker components: Docker Engine, Docker Compose, Docker Swarm, Docker Trusted Registry, and Docker Universal Control Plane, all connected via a pluggable architecture and open APIs. And since this aspect of Docker is a business, the company offers a Docker Datacenter subscription, which lets the buyer choose between business-day support and 24x7x365 coverage for a complete, end-to-end, fully supported CaaS platform in the user's virtual private cloud or data center.
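As a rough sketch of how those pieces fit together, the Python snippet below pushes a locally built image to a private registry and then starts it as a replicated Swarm service. The registry address, credentials, and image name are hypothetical placeholders, not details of Docker's product documentation.

```python
import docker

# Sketch only: registry address, credentials, and image name are
# hypothetical placeholders.
client = docker.from_env()

# Push the locally built image to the private registry.
client.login(registry="dtr.example.com", username="admin", password="secret")
client.images.push("dtr.example.com/ops/web-app", tag="1.0")

# Run it as a replicated service on the Swarm cluster that the control
# plane manages.
client.services.create(
    image="dtr.example.com/ops/web-app:1.0",
    name="web-app",
    mode=docker.types.ServiceMode("replicated", replicas=3),
)
```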

While these are the highest-profile CaaS solutions currently available, there are others in a rapidly growing marketplace with both on-premises and cloud CaaS offerings. For an organization currently considering deploying SaaS within its enterprise, taking a step back and weighing the potential of CaaS to fill the same role at a lower price is definitely worthwhile. For business enterprises that are not cloud based but run consolidated applications, deploying CaaS capability within the existing environment is also worth considering. In any case, CaaS is sure to play a major role in the business application space for the foreseeable future.