
Microsoft’s reduced and refactored Nano Server offers significant performance improvements for cloud and DevOps, and represents the culmination of the operating system’s progress towards a different approach to administration.

The new, tiny Nano Server, announced this week, promises significant improvements – a 93 percent smaller VHD size, 92 percent fewer critical bulletins, and 80 percent fewer reboots. It also caps several years in which Microsoft has been redirecting its system administration tools away from local GUIs to PowerShell command-line tooling.

All of this is a big change, and one that will make system administrators’ lives easier, letting them focus on improving services rather than maintaining infrastructure.

Mike Neil, Microsoft Windows Server general manager – Microsoft

Docker as well 

Nano Server wasn’t Microsoft’s only announcement; it’s also increased its focus on containers, forking Docker to add Windows support and announcing a Hyper-V hosted container model. 

We spoke to Mike Neil, general manager for Windows Server, about the company’s announcements and what they mean for the modern data center – and for Microsoft’s approach to delivering software-defined infrastructure. 

The new version includes some very significant changes, Neil explained: “As we did the refactoring work in Nano Server we went back historically and looked at what caused reboots, what are the dependencies, what are the pieces of functionality for a server that are frankly not paramount capabilities? A lot of refactoring was driven by how to reinvent that. There was a trade-off, to make sure it can run apps, provide functionality, and provide the necessary infrastructure to build the cloud.”

In practice, this has meant removing the entire GUI stack as well as ending 32-bit support. You won’t be able to log on locally, or use remote desktop: everything must be managed through remote PowerShell and its Desired State Configuration tools, as well as a set of new web-based management features.
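To give a flavor of this declarative, GUI-free model, here is a minimal PowerShell DSC sketch; the node, feature, and service names are illustrative only (Nano Server’s actual DSC resource set is more restricted than full Windows Server):

```powershell
# Hypothetical DSC configuration: declare what a node should look like,
# then push that state remotely - no local GUI or RDP session involved.
Configuration NanoWebHost {
    Node 'nano-01' {
        # Ensure a feature is installed (illustrative feature name)
        WindowsFeature Web {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
        # Ensure its service is running, after the feature is present
        Service W3SVC {
            Name      = 'W3SVC'
            State     = 'Running'
            DependsOn = '[WindowsFeature]Web'
        }
    }
}

# Compile the configuration to a MOF document and apply it to the node
NanoWebHost -OutputPath .\NanoWebHost
Start-DscConfiguration -Path .\NanoWebHost -Wait -Verbose
```

The point of the sketch is the shift in mindset: administrators describe the desired end state once, and the DSC engine converges each server to it, rather than clicking through per-machine consoles.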

Looking to the data center

What’s perhaps most interesting about this set of announcements is its focus on the next-generation data center, and on the tools and technologies that will be needed to deliver a software-defined infrastructure for scalable applications. Microsoft describes Nano Server as a platform for what it’s calling “modern” applications, built using DevOps techniques and scalable, microservice-focused platforms like Node.js.

It’s certainly scalable. Neil told us that using Hyper-V with Nano Server it’s possible to run more than 1,000 virtual machine instances on a single host. High-density applications like this work well as a way of delivering containerized microservices, with Docker Windows containers wrapping applications and services, and Nano Server being used as a container host.

Scaling out Nano Server images should allow rapid reconfiguration, with tools like System Center Orchestrator working in conjunction with Docker-focused application orchestration tools like Kubernetes.

Trusting automation

That’s where refactoring Windows Server comes into play. A GUI management platform is at best a distraction and at worst a hindrance when working with thousands of servers. Instead of the individual touch we’ve got so used to with Windows Servers, we need to use automation to work with cloud-scale systems. Microsoft is working with DevOps vendors like Chef to deliver automation tooling, and we won’t be surprised to see Chef’s recently announced Delivery workflow tools being used to handle Nano Server configurations.

In operating system research, there’s a concept called the Library OS: a minimal OS platform that can be tuned with the specific services and features needed to deliver an application, reducing the need for updates and keeping the attack surface as small as possible. Nano Server isn’t that OS, but it’s a long way down the road to delivering it. Using tools like PowerShell DSC and Chef, it’s going to be possible to programmatically construct a server description that builds on Nano Server as a core, adds the elements needed to support a service, and removes them when that service is no longer needed. According to Neil, this approach will allow Nano Server to scale up its features significantly: “You have the option of going back to Server Core, but as you grow in footprint and in size, by adding this you take the impact of doing that.”
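In Chef’s model, that programmatic server description reads as a recipe. The sketch below is hypothetical (resource and service names are illustrative, and assume Chef’s Windows resources are available), but it shows the same idea of composing capabilities onto a minimal core:

```ruby
# Hypothetical Chef recipe sketch: describe the capabilities a node
# needs on top of the minimal OS core, and let the client converge it.

# Install a Windows feature (illustrative name; requires Chef's
# Windows resource support)
windows_feature 'Web-Server' do
  action :install
end

# Make sure the associated service is enabled and running
service 'W3SVC' do
  action [:enable, :start]
end
```

Removing a service is the same operation in reverse: delete the resource from the recipe (or set a remove/stop action) and the next convergence run strips it from the node, keeping the footprint and attack surface minimal.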

Nano Server isn’t the only component of this data center-scale approach to infrastructure. Microsoft’s new Hyper-V Containers are a blend of the familiar hypervisor with the new application abstractions that come with tools like Docker. As Neil explains it, “We use the hypervisor to provide an isolation mechanism, the tried and tested use of virtualization based in a hardware root of trust. It’s very much a core function of the hypervisor to provide that isolation. We then provide higher level abstraction for network and file systems within that boundary, blending the two together.” That way you get the benefits of a highly isolated hardware solution, along with the higher level abstraction of containers, with more shared resources and less associated overhead. The Docker-like Windows Server container uses a shared kernel instance, so updating a container host requires taking all its instances down. With Hyper-V containers you can upgrade containers individually, without affecting the other services running on a host – more like a traditional virtualized environment.
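From the operator’s side, the contrast between the two container models can be sketched as a single flag choice. This is illustrative only: the `--isolation` option shipped in later Docker releases for Windows, and the image name here is hypothetical:

```
# Shared-kernel Windows Server container: fast and dense, but all
# containers on the host share one kernel instance.
docker run --isolation=process mycorp/app:1.0

# Hyper-V container: the same image, run inside a lightweight VM
# boundary, so it can be serviced independently of its neighbors.
docker run --isolation=hyperv mycorp/app:1.0
```

The design choice Neil describes falls out of that flag: process isolation maximizes density, while Hyper-V isolation trades some overhead for hardware-rooted isolation and per-container serviceability.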

Only for DevOps devotees

Flexibility is key to delivering a modern data center, and with the combination of Nano Server and its new container technology Microsoft is making a big shift away from its previous monolithic server model to one that’s more aligned with the way we deliver cloud-scale services. That does mean Nano Server won’t be for everyone: you’ll need to have made the shift to a DevOps model, and to cloud-scale data center infrastructure practices.

It’s not surprising that Microsoft described Nano Server as ideal for use with its CPS “cloud-in-a-box” rack-scale systems. As Neil notes, “With CPS as our reference architecture, all our technologies for our cloud stack are integrated together as a fully integrated solution. Nano Server would play into future versions, we’d have more densities and we’d have more versatility. There’s a lot of goodness we’ll get from using Nano Server as part of our solution.”