It’s no surprise that Docker reigns supreme over other container platforms. The platform sits at the centre of three aspects of modern development: the move to the cloud, microservices and DevOps. Its success is also down to timing: Docker arrived at a time when organisations were looking for a leaner means of deploying applications, beyond virtual machine technology.

Software-driven business

However, there’s a wider reason why Docker is the king of the development hill: the revolution taking place in business. Almost every business is transforming itself into a software business – think banks, airlines, telcos, and retail – and that digital revolution demands innovation in the speed and methods by which services are developed. Docker gives organisations the agility, flexibility and efficiency they need to innovate services at the speed of business.

Container-based development, and the microservices architecture it enables, sits at the heart of the 'modern software factory'. It lets developers package an application together with all its required components, such as libraries and other dependencies, and ship it as a single unit.
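As a brief illustration of that packaging idea, here is a minimal Dockerfile sketch. It is not from the article; the base image, `requirements.txt` and `app.py` entry point are assumptions chosen for a generic Python service, but the pattern is the same for any stack: declare the dependencies, bake them into the image, and ship the result as one artefact.

```dockerfile
# Minimal sketch: package a Python app and its dependencies as one image.
FROM python:3.11-slim

WORKDIR /app

# Install the declared dependencies into the image itself,
# so the container runs identically on any host.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code and define how the container starts.
COPY . .
CMD ["python", "app.py"]
```

Built with `docker build -t myapp .`, the resulting image carries everything the application needs, which is precisely what makes it portable across laptops, data centres and clouds.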

From monolithic applications to microservices

Containers represent a huge leap forward. In the 1990s, monolithic applications and their services were tightly coupled and intolerant to change – thereby inhibiting agility. A modification made to a small section of an application, for example, might require deploying an entirely new version, and scalability often demanded scaling the entire application – not just the desired components.

Think of these monolithic services as compared to something we encounter in the real world: like a wedding cake. At a wedding, everyone is served the cake at a precisely appointed time, all the guests get the same flavour, and it's all synchronised. Microservices, meanwhile, might be compared to cupcakes – agile, loosely coupled, and independent. Cupcakes sit separately on the same plate; you can take a cake whenever you choose, pick any flavour, and crucially, those cupcakes can be replaced independently of one another. This is the modern world of containers.

Containers and microservices offer significant advantages over virtualisation. They share certain system resources, making them leaner and reducing system overhead. Compared to virtualised services, containers can be spun up and copied quickly, resulting in faster and easier scaling. Finally, almost all major container offerings are open source, which avoids vendor lock-in.

The challenge of monitoring Docker

Like every great opportunity, Docker also has its challenges – especially when it comes to monitoring service levels and the performance of applications running in Docker environments. First, Docker introduces a new layer between hosts and applications that traditional tools don’t cover. This lack of visibility can expose the business to performance issues and unanticipated downtime. In response, organisations need to establish specialised monitoring of Docker hosts and containers.

Second, these containers and microservices are ephemeral – they are short-lived and intended to be thrown away quickly. This factor, combined with API-centric communications, results in a massive increase in objects, dependencies and metrics – challenging traditional monitoring approaches, such as topology mapping and instrumentation.

Third, most organisations will already have tools to monitor their existing infrastructures. The addition of Docker-specific monitoring platforms results in multiple, fragmented monitoring tools which can quickly lead to disjointed alerting, delayed troubleshooting, and difficulties in allocating workloads.

Finally, although Docker environments can potentially deliver massive on-demand scalability, this can result in highly dynamic, difficult-to-manage environments. Unless monitoring keeps pace, IT teams risk having gaps in visibility which can ultimately result in downtime or other issues.

How to future-proof Docker with modern monitoring

So what should you look for in a Docker monitoring solution?

1. Fast, simple and easy-to-use

Explore agentless approaches that continuously detect, discover, model and map containers and clusters. Ensure the monitoring integrates with Docker out of the box, automatically capturing Docker-specific attributes and presenting Docker-centric views of the environment.

2. Deep, actionable monitoring insights

Choose a monitoring technology that delivers insights into the health and performance of containers, together with deep application performance analysis. Ensure the technology can automatically detect containers and map them to the correct application tiers.

3. Hyper performance delivered at scale

You need to deal with ‘data density’ – aggregating millions of data points so you can quickly detect and diagnose problems across Docker containers, clusters, applications, user experience and infrastructure. For this, your monitoring solution must deliver a comprehensive view into performance, sharing that insight across multi-functional teams.

Make no mistake. Docker containers are lightweight, run on any computer, on any infrastructure, in any cloud, and are ideal for microservices. However, to innovate faster and accelerate your digital transformation, you need a Docker monitoring solution that is fast and easy to use, provides actionable insights, fits seamlessly into existing workflows, and offers full-stack support for all the relevant container-related technologies.

This is where a modern software factory approach can help, with an integrated toolset to build, test, monitor and secure containers at scale. The objective is to make developers' lives easier across the software development lifecycle and to help DevOps/Ops teams deliver an exceptional end-user experience.

Chris Kline is vice-president of DevOps Strategy at CA Technologies