A little more than four years ago, in June 2014, Google open-sourced Kubernetes, the container orchestration platform based on software that manages the hundreds of thousands of servers that run Google.
Kubernetes not only beat Apache Mesos and Docker Swarm in the container orchestration race; it has arguably become the hottest technology to emerge since Linux, the operating system that commoditized enterprise UNIX and became the ubiquitous platform for everything from IoT to scale-out cloud computing. The question is no longer if Kubernetes, but how rapidly it will become the dominant way for enterprises to develop and deploy applications.
Let’s take a look at four vectors that I think will define the Kubernetes footprint in the data center for years to come:
Bare Metal (Not VMs) Will Be Recognized as the Best Place to Run Containers
Hardware virtualization was one of the great data center revolutions of the last 30 years, and there’s no disputing the riches that VMware has brought to investors. But when Pat Gelsinger tells you that virtual machines are the “best place to run Kubernetes,” that’s just not an accurate assessment. Legacy infrastructure was not built for the way containers use compute, storage and network.
The benefits of resource consolidation, workload isolation and operational simplicity make bare metal the clear platform of choice for running Kubernetes and containers. The opportunity to save money on virtual machine licensing fees will be a strong economic motivator for bare metal, and the fact that Kubernetes and the rest of the cloud-native frameworks in its orbit (Istio, et al.) are designed for bare metal will drive the trend further.
IBM Will Gain Relevance in the Cloud with Red Hat / OpenShift
IBM isn’t the only heavyweight that has struggled to achieve liftoff in the race to compete with AWS and Azure for cloud market share. With its $34bn acquisition of Red Hat, IBM is betting heavily on the strength of OpenShift to get a look from any enterprise making a serious investment in containers and Kubernetes with an eye toward hybrid cloud.
While containers enable faster application development cycles and fit into most Fortune 500 efforts to modernize, there’s a lot more to running containers in production than merely the orchestration layer. OpenShift will get IBM a seat at the table in getting its cloud business taken seriously, and the education and work associated with the shift to containers will be a major boon for IBM professional services.
Kubernetes Has an Opportunity to Creep into the JVM / Java EE Stack
Large-scale container deployments have generally been defined by greenfield applications: net-new systems, built as microservices and designed to run cloud-native. IDC has predicted that 90 percent of all applications will feature microservices architectures by 2022, so the Kubernetes footprint there is only going to grow.
But a parallel opportunity for Kubernetes lies in the emerging cloud-native Java conversations that have been budding since the Eclipse Foundation took over governance of Java EE (now known as Jakarta EE) from Oracle, with a focus on making this enterprise Java runtime stack more compatible with microservices and cloud-native use cases.
The Eclipse Foundation has already made overtures suggesting that Kubernetes integration with Jakarta EE could be on the horizon. What this means for Kubernetes is that the roughly 10 million Java developers worldwide are going to be steered toward it.
Networking and Storage Hardware With “Kubernetes Inside”
Ask any data center operator what it was like to get containerized workloads into production, and you’ll hear about friction: containers can break traditional enterprise networking and storage models, depending on vendor selection and implementation details. To date, vendors have not stepped forward with full-stack solutions that pair CNI-based networking and FlexVolume-based storage models with a “single throat to choke” support model.
Networking models need to offer Layer 2 plug-and-play compatibility, because customers don’t always want overlay networks. Storage models need to support hyperconverged NVMe in the box as well as external arrays via iSCSI or NFS. Vendors have bits and pieces, or they’ve attempted to shoehorn Kubernetes support into existing virtualization models. But it’s clear that bare-metal, native Kubernetes support is the way to go, and that customers want simple, Kubernetes-native networking and storage built in.
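To illustrate the kind of external-array integration described above, here is a minimal sketch of how a bare-metal Kubernetes cluster can consume a LUN on an iSCSI array through a standard PersistentVolume and claim it for a workload. The names, portal address and target IQN below are hypothetical placeholders, not real infrastructure:

```yaml
# Hypothetical sketch: expose an external iSCSI LUN to Kubernetes
# as a PersistentVolume, then claim it for a pod.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: array-lun0                        # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 10.0.0.50:3260          # hypothetical array portal
    iqn: iqn.2018-10.com.example:storage  # hypothetical target IQN
    lun: 0
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: array-lun0-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```

The point of the sketch is how much still sits outside Kubernetes: the LUN provisioning, multipathing and array-side configuration are all manual steps that a “Kubernetes inside” hardware vendor would need to fold into a single supported stack.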