Since the days of Samuel Morse, the pace of technological progress has been intrinsically linked to the amount of information that can be sent down a piece of wire. More information means better decisions, faster innovation, and increased convenience. Everybody loves a bit of extra bandwidth – from consumers to businesses to governments.
As telecommunications networks grew larger and the supply of bandwidth increased, network operators required ever more complex machines, created by businesses that were naturally protective of their inventions. Eventually, the world of telecommunications came to be dominated by expensive metal boxes full of proprietary technology.
But the birth of the Web in the 1990s blurred the line between telecommunications and IT equipment. Since then, the progress of general-purpose computing and advances in virtualization have gradually reduced the need to design advanced functions into hardware. Recent trends like Software-Defined Networking (SDN) and Network Function Virtualization (NFV) have ushered in a new world, in which the hardware is built from common parts in simple combinations; complex services are delivered in software, running on virtual machines.
New world order
In order for SDN and NFV to work, all the elements of virtualized networks must speak a common language and follow the same standards. This at least partially explains why the networking landscape has gravitated towards collaborative open source development models. The disaggregation of software from hardware has resulted in a generation of successful open networking projects, including OpenDaylight, Open Platform for NFV (OPNFV) and Open Network Automation Platform (ONAP).
All of the above are hosted by the Linux Foundation – a non-profit originally established to promote the development of Linux, the world’s favorite server operating system. From these 'humble' origins, the Foundation has grown into a massive hub for open source software development with more than 100 projects spanning AI, connected cars, smart grids and blockchain. It is also responsible for maintaining what is arguably the hottest software tool of the moment, Kubernetes.
Earlier this year, the Foundation brought six of its networking projects under a single banner, establishing Linux Foundation Networking (LF Networking). “We had all these projects working on specific pieces of technology that together form the whole stack – or at least the large pieces of the stack for NFV and next-generation networking,” Heather Kirksey, VP for networking community and ecosystem development at The Linux Foundation, told DCD. Before the merger, Kirksey served as director of OPNFV. “We were all working with each other anyway, and this would streamline our operations – a few companies were on the board of every single project. It made sense to get us even more closely aligned.”
Cloud-native Network Functions
We met Kirksey at the Open Networking Summit (ONS) in Amsterdam, the event that brings together the people who make open source networking software and the people who use it. The latest idea to emerge from this community is Cloud-native Network Functions (CNFs) – the next generation of Virtual Network Functions (VNFs), designed specifically for cloud environments and packaged inside application containers orchestrated by Kubernetes.
VNFs are the building blocks of NFV, delivering services that traditionally relied on specialized hardware – examples include virtual switches, virtual load balancers and virtual firewalls. CNFs take the idea further, putting individual functions into containers that can be deployed in any private, hybrid or public cloud.
“We’re bringing the best of telecoms and the best of cloud technologies together. Containers, microservices, portability and ease of use are important for cloud. In telecoms, it’s high availability, scalability, and resiliency. These need to come together – and that’s the premise of the CNFs,” Arpit Joshipura, head of networking for The Linux Foundation, told DCD.
Application containers were created for cloud computing – hence, cloud-native. Kubernetes itself falls under the purview of another part of The Linux Foundation, the Cloud Native Computing Foundation (CNCF), a community that is increasingly interested in collaborating with LF Networking.
“We started on this virtualization journey several years ago, looking at making everything programmable and software-defined,” Kirksey explained.
“We began virtualizing a lot of the capabilities in the network. We did a lot of great work, but we started seeing issues here and there – to be honest, a lot of our early VNFs were just existing hardware code put into a VM.
“Suddenly cloud-native comes on the scene, and there’s a lot of performance and efficiency gains that you can get from containerization, there’s a lot more density – more services per core. Now we are rethinking applications based on cloud-native design patterns.
"We can leverage a wider pool of developers. Meanwhile the cloud-native folks are looking at networking – but most application developers don’t find networking all that interesting. They just want a pipe to exist.
“With those trends of moving towards containerization and microservices, we started to think about what cloud-native NFV would look like.”
One of the defining features of containers is that they can be scaled easily: in periods of peak demand, just add more copies of the service. Another benefit is portability, since a container packages all of an app’s dependencies into a single environment that can be moved between cloud providers. Just like VNFs, multiple CNFs can be strung together to create advanced services – something the telecommunications world calls ‘service function chaining.’ But CNFs also offer improved resiliency: when individual containers fail, Kubernetes’ self-healing machinery replaces them immediately.
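For a sense of what this looks like in practice, here is a minimal sketch, using Python and the official ‘kubernetes’ client library, of deploying a hypothetical containerized firewall as a Kubernetes Deployment (the ‘vfirewall’ name and container image are illustrative, not a real project). The replicas field provides the easy scale-out described above, while Kubernetes’ control loop recreates any replica that dies.

    from kubernetes import client, config

    config.load_kube_config()  # authenticate with the cluster via the local kubeconfig

    # Describe the CNF as a Deployment: three identical replicas of one container.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="vfirewall"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # scale the function out or in by changing this number
            selector=client.V1LabelSelector(match_labels={"app": "vfirewall"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "vfirewall"}),
                spec=client.V1PodSpec(containers=[
                    # Hypothetical image name, for illustration only
                    client.V1Container(name="vfirewall", image="example.org/vfirewall:1.0"),
                ]),
            ),
        ),
    )

    # Submit it; from here on, Kubernetes keeps three replicas running,
    # restarting any container that fails.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

Scaling the function up then becomes a one-line change to replicas – the platform handles the rest.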
The term ‘CNF’ is just a few months old, but it is catching on quickly: there’s a certain industry buzz here, a common understanding that this technology could simultaneously modernize and simplify the network. Thomas Nadeau, technical director of NFV at Red Hat, who literally wrote the book on the subject, told DCD: “When this all becomes containerized, it is very easy to build the applications that can run in these [cloud] environments. You can almost imagine an app store situation, like in OpenShift and Kubernetes today – there’s a catalogue of CNFs, and you just pick them and launch them. If you want an update, they update themselves.
“There’s lower cost for everybody to get involved, and lower barriers to entry. It will bring in challengers and disruptors. I think you will see CNFs created by smaller firms and not just the ‘big three’ mobile operators.”
It is worth noting that, at this stage, CNFs are still a theoretical concept. The first working examples of containerized functions will appear in the upcoming release of ONAP, codenamed ‘Casablanca’ and expected in 2019.
Waiting for 5G
Another interesting player in the open networking world is the Open Networking Foundation (ONF), an operator-led consortium that creates open source solutions for some of the more practical challenges of running networks at scale. Its flagship project is OpenFlow, a communications protocol that lets SDN controllers program the forwarding behavior of network devices, widely regarded as the first ever SDN standard. A more recent, and perhaps more interesting, endeavor is CORD (Central Office Re-architected as a Datacenter) – a blueprint for transforming the telecommunications facilities required by legacy networks into fully featured Edge data centers based on cloud architectures, used to deliver modern services like content caching and analytics.
During his keynote at ONS, Rajesh Gadiyar, VP for Data Center Group and CTO for Network Platforms Group at Intel, said there were 20,000 central offices in the US alone – that’s 20,000 potential data centers.
“Central offices, regional offices, distributed COs, base stations, stadiums – all of these locations are going to have compute and storage, becoming the virtual Edge. That’s where the servers will go, that footprint will go up significantly,” Joshipura said. “The real estate they [network operators] already have will start looking like data centers.”
Service providers like AT&T, SK Telecom, Verizon, China Unicom and NTT Communications are already supporting CORD. Ideologically aligned hardware designers of the Open Compute Project are also showing a lot of interest - OCP's Telco Project, an effort to design a rack architecture that satisfies additional environmental and physical requirements of the telecommunications industry, actually predates CORD.
Despite their well-advertised benefits, OCP-compliant servers might never become truly popular among colocation customers – but they could offer a perfect fit for the scale and cost requirements of network operators.
Many of these technologies are waiting for the perfect use case that’s going to put them to the test – 5G, the fifth generation of wireless networks. When the mobile industry was switching from 3G to 4G, the open source telecommunications stack was still in its infancy, and Kubernetes simply didn’t exist. With 5G networks, we will complete the virtualization of mobile connectivity, and this time, the tools are ready.
According to Joshipura, 5G will be delivered using distributed networks of data centers, with massive facilities at the core and smaller sites at the edge. Resources will be pooled using cloud architectures – for example, while OpenStack has struggled in some markets, it has proven a massive hit with the telecoms crowd, and is expected to serve the industry well into the future.
“I would say 5G mandates open source automation, and here’s why: 5G has 100x more bandwidth, there will be 1,000x more devices – the scale is just astronomical. You just cannot provision services manually. That’s why ONAP is getting so much attention – because that’s your automation platform,” Joshipura told DCD.
Then there’s the question of cost: during her presentation at ONS, Angela Singhal Whiteford of Affirmed Networks estimated that open source tools can lower the OpEx of a 5G network by as much as 90 percent. She explained that this abundance of bandwidth will need to be ‘sliced’ – a single 5G network could have thousands of ‘slices,’ each configured differently and delivering a different service, from enterprise connectivity to industrial IoT. Speed and ease of configuration are key to deploying that many network segments: today, a new service takes three to 12 months to roll out; on a fully virtualized network, it can be deployed in minutes.
“By moving to open source technology, we have a standardized way of network monitoring and troubleshooting,” Whiteford said. “Think about the operational complexity of monitoring and troubleshooting hundreds and thousands of network slices – without a standardized way to do that, there’s no way you can profitably deliver those services.”
Nadeau, a network engineer by background, offered another perspective: “I’ve long thought that the whole mobile thing was way over-complicated. If you look at the architecture, they have 12-14 moving parts to basically set up a wireless network. The good news is that you used to have to deploy 12 boxes to do those functions, and today you can deploy maybe two servers, and one box that does the radio control. That’s really one of the important parts of 5G – not only will there be higher frequencies and more bandwidth, but also the cost for the operator will go down.”
The Linux Foundation says its corporate members represent 65-70 percent of the world’s mobile subscribers. With this many projects at all levels of the networking stack, the organization looks well placed for an important role in the next evolutionary leap of the world’s networks.
This story originally appeared in the October/November issue of DCD Magazine.