It is indisputable that containers are one of the hottest tickets in open source technology, with 451 Research projecting more than 250 percent growth in the market from 2016 to 2020. It’s easy to see why: container technology combines speed and density with the security of traditional virtual machines, while requiring a far smaller operating system footprint to run.
Of course, it’s still early days, and similar question marks faced OpenStack technology on its path to market maturity and widespread revenue generation. Customers are still asking: “can any of this container stuff actually be used securely in production in an enterprise environment?”
From virtual machines to containers
First, some background. For years, virtual machines have offered companies a way to expand their workloads while cutting costs, but they have their limits.
For example, virtual machines achieve far lower density than containers: they fit fewer applications onto a single physical server. Virtual machines are also heavy on system resources; each one runs a full copy of an operating system, plus a virtual copy of all the hardware that operating system needs in order to function.
Containers offer a new form of virtualization, providing resource isolation almost equivalent to that of a traditional hypervisor. However, containers carry lower overhead, with a smaller memory footprint and higher efficiency. This means that higher density can be achieved – simply put, you can get more out of the same hardware.
Enterprise adoption
The telco industry has been at the bleeding edge of adopting container technology. Part of the catalyst for this trend has been the NFV (network functions virtualization) revolution – the shift of what were traditionally welded-shut proprietary hardware appliances into virtual machines.
We certainly do see virtual machines being used in production in some telcos, but containers are actually a stronger fit in some cases, delivering even better performance for NFV applications.
Developers in enterprise environments are aware that containers offer both higher performance for the end user and operational efficiency for the cloud administrator. However, many CIOs are still unsure that containers are the right technology choice for them, due to wider market misconceptions. For example, some believe that by using one particular type of container, they will lock themselves into a specific vendor.
Security worries
Another common misconception that might present an obstacle to enterprise adoption concerns security. However, there are several controls in place that enable us to say, with confidence, that an LXD container is more than secure enough to satisfy the CIO who is, understandably, more security-conscious than ever.
One of these is resource control, which, inside the Linux kernel, is provided by a technology called cgroups (control groups), originally engineered at Google in 2006. Cgroups is the fundamental kernel facility that groups processes together so that their resource usage can be accounted for and limited as a unit. This is essentially what a Docker or LXD container is – an illusion that the Linux kernel creates around a group of processes to make them look as though they belong together.
Within LXD and Docker, cgroups allows you to assign limits on resources such as CPU, memory, disk storage and I/O throughput. Therefore, you can keep one container from taking all of the resources away from other containers. From a security perspective, this is what ensures that a given container cannot perform a denial of service (DoS) attack against other containers alongside it, thereby providing quality of service guarantees.
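To make that concrete, below is a minimal sketch of the raw cgroup v2 kernel interface that runtimes such as LXD and Docker drive on your behalf. It assumes a Linux host with cgroup v2 mounted at /sys/fs/cgroup, the cpu and memory controllers enabled, and root privileges; the group name “demo” and the specific limits are invented for illustration.

    import os

    CGROUP_ROOT = "/sys/fs/cgroup"
    group = os.path.join(CGROUP_ROOT, "demo")

    # Creating a directory under the cgroup mount creates a new control group.
    os.makedirs(group, exist_ok=True)

    # Cap the group at half of one CPU: 50,000us of runtime per 100,000us period.
    with open(os.path.join(group, "cpu.max"), "w") as f:
        f.write("50000 100000")

    # Cap the group's memory usage at 256 MiB.
    with open(os.path.join(group, "memory.max"), "w") as f:
        f.write(str(256 * 1024 * 1024))

    # Move the current process into the group; everything it does from
    # now on is accounted against, and held to, these limits.
    with open(os.path.join(group, "cgroup.procs"), "w") as f:
        f.write(str(os.getpid()))

In practice no administrator touches these files by hand; LXD exposes the same knobs as configuration keys (for example limits.cpu and limits.memory), and Docker exposes them as flags such as --cpus and --memory.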
Mandatory access control (MAC), provided on Linux by systems such as AppArmor and SELinux, also ensures that neither the container code itself, nor the code run within the containers, has a greater degree of access than the process itself requires, so the privileges granted to a rogue or compromised process are minimized.
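The kernel exposes each process’s MAC context through the /proc filesystem, which makes this confinement easy to inspect. Here is a small sketch, assuming an AppArmor host (as on Ubuntu, where LXD confines each container under its own profile); the PID in the final comment is a hypothetical container process.

    def mac_label(pid="self"):
        # The kernel reports a process's MAC security context here; under
        # AppArmor a confined container process reads as a profile name plus
        # mode, e.g. "lxd-mycontainer_<...> (enforce)", while an unconstrained
        # process reads as "unconfined".
        with open(f"/proc/{pid}/attr/current") as f:
            return f.read().strip()

    print(mac_label())        # the label confining this script itself
    # print(mac_label(4242))  # hypothetical PID of a container's process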
In essence, the greatest security strength of containers is isolation. Kernel-enforced isolation ensures that containers cannot access one another’s processes or data. There may be situations where a virtual machine is required for particularly sensitive data, but for the most part containers deliver the security enterprises need.
IoT is leading the way
Many of the notable trends dominating the tech news agenda over the last couple of years, particularly the Internet of Things, are pushing the shift towards enterprise adoption of containers. Container technology is arguably the ideal response to the scalability and data-related issues presented by the proliferation of IoT applications.
Containers, in tandem with edge computing, are well suited to moving data between connected devices and the cloud. Harvesting data from any number of remote devices and processing it calls for extreme scaling. Application containers, with the help of tools such as Docker and Ubuntu Core, which runs app packages for IoT known as “snaps,” can help provide this, as the sketch below suggests.
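Because a container starts in seconds rather than minutes, scaling out a fleet of identical services becomes a loop rather than a project. A minimal sketch, assuming a host with LXD installed and access to the ubuntu:16.04 image; “sensor-gw” is an invented name for an IoT gateway service.

    import subprocess

    FLEET_SIZE = 20  # dozens of containers fit where only a few VMs would

    for i in range(FLEET_SIZE):
        # "lxc launch <image> <name>" creates and starts a container.
        subprocess.run(
            ["lxc", "launch", "ubuntu:16.04", f"sensor-gw-{i}"],
            check=True,
        )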
Why containers?
Container technology has brought about a step-change in virtualization technology. Organizations implementing containers see considerable opportunities to improve agility, efficiency, speed, and manageability within their IT environments. Containers promise to improve data center efficiency and performance without additional investment in hardware or infrastructure.
For Linux-on-Linux workloads, containers can offer a faster, more efficient and cost-effective way to build an infrastructure. Companies using these technologies can take advantage of brand-new code, written using modern advances in technology and development discipline.
We see a lot of small to medium-sized organizations adopting container technology as they build from scratch, but established enterprises of all sizes, and in all industries, can channel this spirit of disruption to keep up with the more agile and scalable new kids on the block.
Marco Ceppi is Ubuntu product strategist at Canonical