The cloud is broadening its footprint as it moves from a largely centralized model to a growing edge cloud presence. This not only affects the geographical distribution of data centers; it is putting new demands on IP networks and data center interconnection for secure, low latency, and scalable bandwidth that can meet the highly dynamic needs of the emerging metaverse.
If the opening phase of the cloud was largely about delivering asymmetrical services like streaming video to consumers, the next phase will see more symmetrical traffic patterns. This is being driven in part by the digital transformation of enterprises, which are accelerating the adoption of edge clouds, especially for latency-sensitive automation use cases. Hybrid work models, which seem to be persisting post-pandemic, digital sovereignty requirements, and the rapid growth of IoT applications are also contributing.
To meet the demand for on-premises, colocation, and edge cloud delivery options, the data center interconnection market will need to transform itself. Equinix and TeleGeography forecast a CAGR of 44 percent for global data center interconnection. Further, by 2024, interconnection bandwidth is predicted to be 15x larger than internet bandwidth. Operators with data centers in multiple locations will require different kinds of interconnection, such as multi-site, ring, and mesh topologies, especially in metro areas.
Bringing cloud services closer to the user improves latency and speed. But it also leads to greater complexity when scaling. Operators managing tens of data centers today may be running hundreds of edge data centers tomorrow. As cloud services, applications, and workloads grow and diversify, data center operators will need to automate their operations to dynamically manage interconnectivity between their data center fabrics, edge clouds, and the wide area network.
One of the lessons of the pandemic for cloud application service providers was how quickly demand patterns can shift geographically. If they previously treated data center interconnect as relatively static and predictable point-to-point links, the overnight shift to stay-at-home work taught them otherwise. As they look ahead to the emerging metaverse, they are recognizing that their data center fabrics and wide area network interconnections need to become more flexible and consumable. Short-lived applications and movable workloads will create a highly dynamic environment that will challenge existing network fabrics and operations.
From the perspective of these cloud applications, diverse network resources need to present as a single, consumable fabric, with sophisticated telemetry enabling operators to assure that dynamic service levels are being met. As the use of cloud applications scales, it is not just virtual compute and storage resources that must respond, but network services as well. Integration between cloud management systems and the associated new tools will be needed to enable rapid provisioning of network resources that support the deluge of emerging cloud applications. Automated closed-loop assurance will be essential to ensure contracted edge cloud application service levels are maintained, especially for business- and mission-critical use cases.
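The closed-loop assurance idea described above can be sketched in a few lines: poll latency telemetry per link, compare it against a contracted SLA, and trigger a remediation action when the SLA is breached. This is a minimal illustration; the SLA threshold, link names, and remediation hook are all assumptions, not any particular vendor's API.

```python
# Minimal sketch of a closed-loop assurance cycle (illustrative only).
SLA_LATENCY_MS = 5.0  # assumed contracted edge-cloud latency target

def evaluate(samples_ms):
    """Return (meets_sla, p95) for a list of latency samples in ms."""
    ordered = sorted(samples_ms)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    return p95 <= SLA_LATENCY_MS, p95

def assurance_loop(telemetry_feed, remediate):
    """Check each link's telemetry; call remediate() on SLA breaches."""
    for link, samples in telemetry_feed:
        ok, p95 = evaluate(samples)
        if not ok:
            remediate(link, p95)  # e.g. reroute traffic or add capacity

# Usage with fabricated telemetry: one breaching link, one healthy link.
breaches = []
assurance_loop(
    [("dc1-edge3", [2.1, 2.4, 9.8, 2.2, 11.5]),
     ("dc1-edge4", [1.9, 2.0, 2.1, 2.3, 2.2])],
    lambda link, p95: breaches.append((link, p95)),
)
```

In a real deployment the telemetry feed would come from streaming sources rather than static lists, and remediation would act on the network controller, but the control loop has the same shape.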
This demands changes to the way networks are managed similar to those seen in webscale operations, where continuous integration (CI) and continuous deployment (CD) are practiced using agile DevOps methodologies. Network operations, or NetOps, will need to be closely connected to DevOps. Like DevOps, it will need to rely on automation throughout its operations pipelines, including automated testing using digital twin technologies to simulate the live network environment before release. Just as DevOps required overcoming organizational silos, NetOps will require separate teams, such as IP routing and optical transport, to work more closely if they are to be sufficiently responsive to cloud application demands.
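A NetOps pipeline stage of the kind described above can be sketched as a gate: a candidate configuration is validated against a digital-twin simulation, and only deployed if the simulation passes. The validation rule and all function names here are illustrative assumptions, not a real product API; a production twin would model topology and traffic, not a single field check.

```python
def simulate_twin(config):
    """Stand-in for a digital twin: here it merely checks that every
    interface in the candidate config declares an MTU."""
    return all("mtu" in intf for intf in config["interfaces"])

def pipeline(candidate, deploy):
    """Gate a candidate config on twin validation before deploying."""
    if simulate_twin(candidate):
        deploy(candidate)
        return "deployed"
    return "rejected: failed twin validation"

# Usage: one valid candidate, one that fails validation.
deployed = []
ok = pipeline({"interfaces": [{"name": "eth0", "mtu": 9000}]}, deployed.append)
bad = pipeline({"interfaces": [{"name": "eth1"}]}, deployed.append)
```

The point is the pipeline shape: no change reaches the live network without first passing the simulated one, mirroring how DevOps gates code on automated tests.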
For network operations teams to manage this kind of software-driven environment, they will require new skill sets such as programming and software development. One way to ease this transition is by making the various layers of the virtual data center fabric programmable. Using high-level, low-code, intent-based languages, a network engineer can describe in traditional terms the performance characteristics required, and the network automatically translates those intents into actual configurations.
Secure edge intelligence
The interconnection of diverse resources within and between data centers will require operators to architect their IP networks with a highly scalable and simplified core, putting intelligence and security at the edge and peering points. Internet exchanges that deploy these capabilities can even market themselves in this new world as interconnection providers offering service-level agreements (SLAs) for both service and security.
With the growth of private 5G enterprise networks for Industry 4.0 applications, for instance, there is a demand for ultra-low latency for front-, mid-, and backhaul connections. Serving both the enterprise on-premises edge cloud and the 5G backhaul requirements could be a lucrative market for interconnection providers in the future. The possibility also exists for layering on further security services, IoT device management, machine learning analytics, and other value-added enterprise applications on top of the interconnectivity and local edge cloud.
To take advantage of the data interconnection opportunity and thrive in the evolving metaverse, data center providers and network operators need the ability to interconnect data centers at scale with multiple service providers, cloud providers, and internet providers. Building a scalable, secure, and programmable cloud interconnection platform will enable them to serve their customers with the kind of consumable network fabric that tomorrow’s applications will demand.