The Open Compute Project, created to produce cost-effective generic hardware for large users, now clearly extends to cover all aspects of the modern software-defined data center, judging by the opening announcements at the organisation's Summit in San Jose, California.
A packed first-day keynote covered networking, storage and compute – along with software to quickly deploy open data centers. Facebook showed its previously announced open network switch, with hardware available in the first half of 2015, and demonstrated the FBOSS open switching system, which is now available on Github. Other software emerged from Cumulus and Big Switch.
Other hardware announcements included a new line of cloud-scale servers from HP designed to take advantage of the software-defined nature of the cloud. Instead of including built-in lights-out management hardware, the new Cloudline servers take advantage of the inherent resiliency of cloud software – and of its multi-layered fabric approach to management.
Meanwhile, Intel unveiled a new Xeon system-on-a-chip, intended to power small single-socket cloud servers. Powering part of Facebook's new Yosemite cloud compute platform, the Xeon D-5100 is the chipset on Yosemite's Monolake processor boards. Cloud compute comes in many forms, and one compelling use case is high-performance computing. While much of the HPC world is focused on GPU computing, Intel's Xeon Phi builds on its 60-core x86 model, and is now available in an Open Compute-compatible board, ready for use as a component in a large-scale cloud compute fabric.
And former Open Compute executive director Cole Crawford unveiled his new project, Vapor IO, which manages hardware and reconfigures racks into rings around a heat-extracting chimney.
Easy tooling for software
Hardware needs software, and the Open Compute Project has its software elements. Mark Shuttleworth, founder and lead designer at Canonical, demonstrated what could become a key technology for self-configuring private and hybrid clouds: tooling that can identify the hardware in a data center and quickly configure it as a cloud. Pulling together a group of servers at a University of Texas data center, Shuttleworth was able to deploy OpenStack across 70 servers using Canonical's Autopilot.
Part of the tooling is a service Shuttleworth calls Metal as a Service, describing it as "the complete automation of the physical layer in a data center." Cloud services abstract away from the hardware, but they depend on it. MaaS brings back that linkage, allowing cloud software to detect the servers added to a rack and quickly deploy the appropriate software using familiar configuration tools – including Chef (using its new Chef Knife tools).
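The workflow MaaS automates – detect bare metal, allocate it, and push an OS onto it – can be sketched with Canonical's MAAS command-line client. This is an illustrative sketch only: the profile name, server URL, and environment variables here are placeholders, and exact subcommand names vary between MAAS releases.

```shell
# Log in to the MAAS API server (profile name, URL, and key are placeholders)
maas login admin http://maas.example.com/MAAS/ "$MAAS_API_KEY"

# List the machines MAAS has auto-detected on the rack
maas admin machines read

# Allocate one of the detected machines to this user,
# then deploy an operating system image to it
maas admin machines allocate
maas admin machine deploy "$SYSTEM_ID"
```

Once a machine is deployed this way, higher-level tools see it as just another cloud instance, which is the linkage between hardware and cloud software the article describes.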
MaaS supports more than just Intel processors, and it is part of the tooling needed to deliver a software-defined data center, managing host OSes and network hardware (including IP address management in the next release) across both OCP and proprietary hardware.
Automating hardware deployment makes it easier to orchestrate service deployments, using tools like Canonical’s Juju. Treating software, like hardware, as composable parts simplifies the deployment of the elements of a large application, and there’s the additional prospect of integrating with containers using Google’s Kubernetes inside Juju.
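Treating services as composable parts looks roughly like this with Juju's CLI – a hedged sketch using well-known public charm names, not a transcript of Shuttleworth's demo:

```shell
# Deploy two services as independent, composable charms
juju deploy mysql
juju deploy wordpress

# Relate them: Juju wires up the database connection automatically
juju add-relation wordpress mysql

# Kubernetes itself can be deployed the same way, as a bundle of charms
juju deploy kubernetes-core
```

The point of the model is that relations, not hand-edited configuration files, describe how the parts of a large application fit together.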
“It’s a time of profound reinvention and radical change,” Shuttleworth said. That's true of much more than Canonical and Ubuntu; it's the whole data center model that's changing to build the hyperscale cloud – on premises and in the public internet. The OCP is part of that change, and it's good to see so many ostensible competitors sharing information and, more importantly, donating their intellectual property.