As today’s leading telecommunications companies build out their core networks to support high-value 5G mobile services, the key decisions they make at the outset regarding their service mesh architectures can greatly impact the level of 5G network functionality, flexibility, monitoring, and security their data centers can expect.
Service mesh is a cloud-native technology that provides traffic management, security, behavioral insights, and operational control over the network of microservices running within a Kubernetes cluster. The service mesh control plane is the means by which data centers manage one or more vendors and their respective 5G cloud-native network functions on the Kubernetes platform.
In this article, we’ll consider the pros and cons of four different service mesh patterns used to deploy cloud-native microservices on the Kubernetes platform, ultimately driving 5G core networks at telecom data centers. These four use cases involve hosting all cloud-native network functions (CNFs) from:
- A single vendor using a single service mesh on a single Kubernetes cluster
- Different vendors using a single service mesh on a single Kubernetes cluster
- Different vendors using multiple service meshes on a single Kubernetes cluster
- Different vendors using multiple service meshes on multiple Kubernetes clusters
Single Vendor/Single Mesh/Single Cluster
The shift from physical network functions (PNFs) to virtual network functions (VNFs), and now to cloud-native Containers-as-a-Service (CaaS) environments, has created huge technological challenges for the telecommunications industry. Today, service mesh implementations such as Istio, Linkerd, Consul, and Kuma provide a framework that gives data centers control over the microservices in the CNFs that comprise their 5G core network, including security, observability, and traffic management.
When the CNFs are all provided by a single 5G vendor, data centers can typically use a single-mesh pattern to orchestrate those functions on a single Kubernetes cluster. Workload traffic can be organized into separate namespaces (Kubernetes’ native isolation mechanism), with a single ingress/egress gateway.
The drawback is that the data center is locked into a particular vendor and cannot procure services from another that might offer better prices or features. On the plus side, the use of a single Kubernetes cluster results in a light compute resource utilization footprint.
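As a minimal sketch of this pattern, assuming Istio as the mesh implementation (one of the meshes named above; all resource names, hosts, and secrets here are illustrative), the vendor’s CNF workloads can live in a namespace opted into sidecar injection, fronted by a single shared ingress gateway:

```yaml
# Namespace for the single vendor's CNF workloads; the label opts
# every pod in it into automatic Envoy sidecar injection.
apiVersion: v1
kind: Namespace
metadata:
  name: vendor-a-5g-core          # illustrative name
  labels:
    istio-injection: enabled
---
# One shared ingress gateway for all CNF traffic entering the mesh.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: cnf-ingress
  namespace: vendor-a-5g-core
spec:
  selector:
    istio: ingressgateway          # binds to the default gateway deployment
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: cnf-tls-cert   # illustrative TLS secret name
      hosts:
        - "*.5gc.example.com"          # illustrative host
```

Routing from this gateway to the individual CNF microservices would then be defined with VirtualService resources bound to it.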
Different Vendors/Single Mesh/Single Cluster
When the architecture supports different vendors at once, the data center has the flexibility to procure CNF services from multiple vendors. With a single mesh control plane, multiple network functions can be monitored from a single dashboard. Since the vendors share the same Kubernetes cluster, this configuration also has a relatively light resource utilization footprint.
However, whenever vendors share the same Kubernetes cluster, they are essentially competing for cluster resources, such as CPU, RAM, and disk capacity. Their applications are separated only by namespaces, and each vendor must support whatever service mesh software and version the Kubernetes platform is running. Having to share a single mesh and Kubernetes cluster also tends to raise vendor concerns about the security of their software and how it will be deployed and used.
If a technical problem develops and it is unclear which vendor’s application is responsible, finger-pointing can ensue, leaving the data center team to take the lead on the troubleshooting required to restore service.
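One way to blunt the resource competition described above, at the Kubernetes layer rather than the mesh layer, is to cap each vendor’s namespace with a ResourceQuota. A sketch, with all names and limits purely illustrative rather than recommendations:

```yaml
# Caps aggregate CPU, memory, and storage claims for one vendor's
# namespace, so its CNFs cannot starve the other tenants on the cluster.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: vendor-b-quota
  namespace: vendor-b-cnf        # illustrative vendor namespace
spec:
  hard:
    requests.cpu: "16"           # total CPU the namespace may request
    requests.memory: 64Gi
    limits.cpu: "32"             # total CPU limit across all pods
    limits.memory: 128Gi
    persistentvolumeclaims: "20" # cap on storage claims
```

Quotas bound the sharing problem but do not eliminate it: the vendors still run on common nodes and a common mesh control plane.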
Different Vendors/Multiple Mesh/Single Cluster
With multiple service meshes dedicated to different vendors on the same Kubernetes cluster, multi-tenancy isolation is created by the separate service mesh control planes, resulting in a greater degree of security. Each vendor has its own dedicated namespace, and, with separate ingress/egress gateways, network traffic to and from each vendor’s functions (microservices) travels along separate paths.
While this service mesh pattern involves a more complex implementation, the data center gains by not being locked into the price, features, or services of any particular vendor. In terms of drawbacks, each service mesh can view only its own vendor-specific data plane through its own graphical user interface (GUI), so this pattern lacks a single point of monitoring and control.
For network operations monitoring, data centers can use observability tools such as Kiali, Prometheus, Jaeger, and Grafana. This use case still has a relatively light resource footprint, though the platform consumes additional resources to run the multiple service mesh control planes.
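With Istio, one way this pattern can be realized is with revisioned control planes, each scoped to its vendor’s namespaces via discovery selectors. A sketch assuming Istio 1.10 or later; the revision and label names are illustrative:

```yaml
# One of several control planes on the same cluster; a second
# IstioOperator with revision "vendor-b" would be deployed alongside it.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: vendor-a-mesh
spec:
  revision: vendor-a             # distinguishes this control plane
  meshConfig:
    discoverySelectors:          # watch only this vendor's namespaces
      - matchLabels:
          mesh: vendor-a
---
# Vendor A's namespace joins that control plane via the revision label.
apiVersion: v1
kind: Namespace
metadata:
  name: vendor-a-cnf             # illustrative name
  labels:
    mesh: vendor-a               # matched by discoverySelectors above
    istio.io/rev: vendor-a       # selects the vendor-a sidecar injector
```

Each revisioned control plane then injects and manages sidecars only for its own vendor’s namespaces, producing the per-vendor isolation described above.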
Different Vendors/Multiple Mesh/Multiple Clusters
The benefit of leveraging multiple Kubernetes clusters for multiple vendors is that each vendor gains a greater degree of isolation and flexibility. Workloads are isolated at the cluster level, with each Kubernetes cluster running on separate physical or virtual machines and hosting a completely independent service mesh platform. This multi-tenancy isolation gives vendors a greater sense of security, and they no longer have to compete with one another for data center resources.
Compared with the previous three use cases, the use of multiple Kubernetes clusters results in the highest resource consumption. As in the third use case, each service mesh can view only its own vendor-specific data plane, so this pattern lacks a single point of control.
Of these four use cases, the Single Vendor/Single Mesh/Single Cluster pattern is the simplest to structure and deploy: it involves the least operational complexity, consumes the fewest resources, and enables the most centralized monitoring and control.
If, however, the data center team wants multi-vendor flexibility, they can place different vendors on either a single Kubernetes cluster or multiple clusters, managed by one or more service mesh control planes. Complexity increases with the number of vendors, Kubernetes clusters, and service mesh control planes implemented within the 5G core network, and resource utilization grows accordingly.
Regardless of which of these four service mesh patterns is adopted, the strategy that data centers employ to structure their core network architecture and to deploy service mesh greatly impacts the benefits, efficiencies, challenges, and limitations they ultimately experience.