Open source Kubernetes code will help the partnership think inside the box to make OpenStack deployment simpler
CoreOS is working with Intel to merge Kubernetes container management capabilities with OpenStack to create a single software-defined infrastructure (SDI) stack based on open source components. By packaging the stack as containers and moving that work upstream, the companies aim to make SDI much easier to implement.
The development is part of the CoreOS commitment to open source: by moving the code upstream in the ecosystem, anyone can use it as a foundation and accelerate the development of even larger cloud projects.
Control plane APIs
OpenStack is essentially a set of control plane APIs that lets users deploy Infrastructure-as-a-Service (IaaS) virtual machines and other software-defined infrastructure in any environment. By adding Kubernetes, Intel and CoreOS hope to supply the deployment and lifecycle management layer, which they feel is a sticking point for some deployments of OpenStack.
Effectively, OpenStack would become a set of web and system applications packaged in containers and deployed using the Kubernetes management platform. CoreOS also plans to offer the stack as an option in Tectonic, its commercial distribution of Google’s Infrastructure For Everyone Else (GIFEE). The package would include Kubernetes, CoreOS Linux, and additional platform services such as Quay by CoreOS and OpenStack.
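To make that idea concrete, an OpenStack control plane service could be described to Kubernetes like any other containerised application. The manifest below is a minimal sketch, not part of any announced CoreOS/Intel packaging: the image name, replica count, and labels are illustrative assumptions; only the Kubernetes Deployment/Service resource types and Keystone's conventional API port (5000) reflect real conventions.

```yaml
# Hypothetical sketch: running the OpenStack Keystone identity service
# as a containerised application managed by Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keystone-api
  labels:
    app: keystone
spec:
  replicas: 2                # assumed replica count for availability
  selector:
    matchLabels:
      app: keystone
  template:
    metadata:
      labels:
        app: keystone
    spec:
      containers:
      - name: keystone
        image: example.org/openstack/keystone:latest  # assumed image name
        ports:
        - containerPort: 5000  # Keystone's conventional public API port
---
# Expose the Keystone pods to the rest of the cluster under one address.
apiVersion: v1
kind: Service
metadata:
  name: keystone
spec:
  selector:
    app: keystone
  ports:
  - port: 5000
    targetPort: 5000
```

Under this model, upgrading or scaling Keystone becomes a routine Kubernetes operation (editing the Deployment and letting the platform roll out the change) rather than a bespoke OpenStack installation task.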
Alex Polvi, CEO of CoreOS, said, “We believe that once it is easier to deploy and manage OpenStack, we’ll see rapid acceleration in adoption, quality, and development of the project.”
He added that running OpenStack as an application on Tectonic will turn a data center into a single platform architecture. Managing and deploying OpenStack would be like running any other application on Kubernetes, providing enterprises with the benefits of both containers and IaaS virtualisation.
Jason Waxman, vice president and general manager of Intel’s Cloud Platforms Group, said, “Both the Kubernetes and OpenStack communities can benefit greatly by having an orchestration layer to manage workloads across VMs and containers.”