
It’s a concept application developers, along with many data center specialists, wish had been invented years earlier: a self-contained application that includes lightweight versions of all the dependencies and libraries it needs to run on any system to which it’s deployed... or rather, for now, any Linux system.

And up to now, one of Linux’s biggest shortfalls has been that developing for one distribution can prevent an application from running on all the others, because of each distro’s unique dependencies.

At the premier conference Monday morning in San Francisco for fast-growing Docker, Inc. (until recently known as dotCloud), the firm rolled out version 1.0 of Docker, the first release-to-manufacturing (RTM) edition of its tool for packaging and distributing completely self-contained applications. The company said the new designation will enable customers to deploy and run Docker apps, with certification and full product support, in production environments.

“This is important technology for the evolution of PaaS (Platform-as-a-Service),” Al Hilwa, IDC’s program director for software development research told FOCUS. “It is an important way to get standardization at the sub-virtual machine level, allowing portable apps to be packaged in a lightweight fashion and easily and reliably be consumed by PaaS clouds everywhere. The level of ecosystem support Docker has gained is stunning, and it speaks to the need for this kind of technology in the market and the value it provides.”

The whole point of remote procedure calls, dating back to their origins in UNIX, was to reuse code. That code could be distributed in libraries and standardized by operating systems, so that managing those libraries would not become unbearable.

But libraries helped to harden the behavioral boundaries preventing interoperability. Sun Microsystems’ Java was created to cross those boundaries by implementing a managed runtime that compiles portable code into native instructions just before execution, or just-in-time (JIT). Microsoft soon followed with the .NET Framework, which began as a Windows-only implementation but has since branched out to Linux and Mac through a partnership between Microsoft and independent developer Xamarin.

PaaS platforms such as Heroku have heralded a third epoch of cross-platform development, where sophisticated dynamic languages run the compute-intensive portions of apps at runtime on the server, and render the UX through a client-side browser. This is the evolutionary wave of distributed service development that has picked up the greatest momentum in recent years.

But containerized apps are catching up fast. While Java and .NET applications each rely on their own stand-alone runtime package, and conventional Windows or Linux applications place procedure calls to external libraries, a Docker app includes its own built-in runtime package. These apps are hosted on the company’s own PaaS, which founder Solomon Hykes openly characterizes as competitive with Salesforce’s Heroku.

Hykes says he chose Docker’s shipping-container metaphor because of how shipping and logistics companies solved the problem of moving multiple classes of goods across continents. In many cases, the shipping container led shippers to rethink the design of the goods being shipped. That same lesson can apply to vendors who have, in recent years, taken to distributing their apps as virtual machines — complete with operating systems and dependent libraries — in order to ensure their reliability.

In fact, some developers had taken to building VirtualBox VMs with minimalist Linux distros such as Chakra or Linux Live — which is designed to be run from a USB stick — plus server-side runtimes such as Node.js. This way, users could set up their VM in their datacenter or in a public cloud, and run their apps through a browser.

A Docker package behaves much like a virtual machine, but a far lighter one: it ships with a shrunken set of libraries and omits the shared code and other operating system overhead that the application doesn’t use. What’s more, later versions of applications deployed through Docker’s PaaS-based hub can incorporate only the added dependencies necessary to run the new code, if any.
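That incremental-layer behavior falls out of how Docker images are built: each instruction in a Dockerfile produces a cached layer, so a new release that touches only application code reuses every layer beneath it. A minimal sketch, using a hypothetical Node.js app (the base image, package names, and file names are illustrative assumptions, not from the article):

```dockerfile
# Base layer: a minimal Linux userland, cached and shared across images
FROM debian:stable-slim

# Dependency layer: rebuilt only when package.json changes
WORKDIR /app
COPY package.json .
RUN apt-get update \
    && apt-get install -y --no-install-recommends nodejs npm \
    && npm install \
    && rm -rf /var/lib/apt/lists/*

# Application layer: a new version of the code changes only this layer,
# so a redeploy ships just the delta rather than the whole stack
COPY server.js .
CMD ["node", "server.js"]
```

Because layers higher in the file invalidate only the layers after them, ordering the rarely-changing steps first is what keeps successive deployments small.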

There was no greater sign of the strength and diversity of the Docker ecosystem Monday than the announcement by none other than Microsoft that it will be supporting multiple Docker hosts per user on its Azure cloud. If you’re wondering what this announcement has to do with Windows... it doesn’t.

In a blog post Monday from the conference, the company’s Open Technologies representative Ross Gardler said Azure’s Docker deployment tool will support the standard Docker client tools that developers already use to manage and maintain their deployments. This way, configuration can take place at the client side, communicating with the Azure host in the background.
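In practice, that client-side workflow amounts to pointing the stock Docker client at a remote daemon. A rough sketch, assuming a Docker host has already been provisioned on Azure (the hostname and port here are placeholders, not values from the announcement):

```
# Tell the standard Docker client where the remote daemon lives
export DOCKER_HOST=tcp://mydockerhost.cloudapp.net:4243

# The familiar client commands now execute against the Azure host
docker pull ubuntu
docker run -d ubuntu /bin/sh -c "while true; do echo hello; sleep 1; done"
docker ps
```

The client stays on the developer’s machine; only the daemon endpoint changes, which is what lets existing tooling and scripts carry over unmodified.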

“Docker is complementary to other platform technologies in that you can package any workload in it,” IDC’s Hilwa says. “It enables easy and portable deployment once the packaging is done. I expect to see many platform technologies take advantage of it.”