The impact of 5G and the services it enables, such as IoT and autonomous systems, requires a smarter approach to IT architecture. It’s imperative that the industry gets it right from the start.
While many consumers will experience 5G through smartphones and gaming, the same platform will be instrumental for automation, performance, and information across many segments, including manufacturing, healthcare, energy and utilities. The sweeping adoption of 5G bandwidth and connectivity is also creating a radical drive for more aggressive digitization from end to end, with telecom operators recognizing the need to push more of their data center capabilities to the edge to externalize innovation and accelerate new services.
Significantly, the 5G era is creating so much demand for massive infrastructure at the edge that it’s expected to eclipse the size of centralized cloud infrastructure. While future predictions are not always reliable, consider this: the global data center market has been estimated to reach $251B by 2026 (Arizton), yet worldwide spending on edge computing is expected to reach $250B in 2024, two years earlier (IDC).
This massive shift has revealed deficiencies in traditional infrastructure growth models. It has become increasingly obvious that building and deploying IT hardware rapidly, and at scale, to meet the requirements of 5G edge computing requires a new architectural paradigm designed specifically for distributed models and their unique workload requirements at the edge. To ensure consistent service levels, the architecture must be able to support integrated and efficient management, and mobility of data between the edge and core.
Multi-tier architecture requires homogeneity
5G services are driven by extreme demands for low latency, combined with local processing and storage. Emerging 5G services promise sub-millisecond response times that are critical for real-time applications ranging from drones to high-end gaming. Similarly, mmWave 5G connections have 10X the bandwidth of prior-generation LTE networks, opening up vast capabilities for automation based on the Internet of Things (IoT).
To meet the need for low latency and high bandwidth, the emerging IT architecture stratifies into layers from the edge device to the core. Each layer has distinct characteristics, and together they operate as a whole:
- The edge device, such as a smartphone, an industrial sensor, or an autonomous vehicle.
- The “near edge,” potentially a very small-form-factor compute and storage device that pre-processes and routes data from the edge device. This layer is designed to be integrated into a 5G node such as a streetlamp or macro tower.
- Edge “micro” data centers to support local services, located in proximity to the 5G node such as in racks at the base of a cell tower.
- Modular data centers that provide regional compute, storage and routing across a geographical region, which may be located in containers or existing office buildings.
- The core data center at the center of it all, often a massive hyperscale facility operated by the very largest cloud service providers.
Each layer has different data latency, compute and storage requirements. They can have radically different form factors, access points, and operational models. Yet, each element of these unique implementations must be able to work together and function as unified, always-on infrastructure. If each layer is designed independently, it will prove extraordinarily costly to deploy and maintain.
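The placement decision these layers imply can be sketched in a few lines of Python. This is purely illustrative: the tier names mirror the list above, but every latency figure and capacity number is an assumption invented for the example, not a measurement or specification.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    round_trip_ms: float  # assumed typical round-trip latency to the edge device
    rack_capacity: int    # assumed order-of-magnitude rack count

# Illustrative tiers, from nearest the device to the centralized core.
TIERS = [
    Tier("near edge",        1.0,      1),   # integrated into a 5G node
    Tier("edge micro DC",    5.0,      4),   # racks at the base of a cell tower
    Tier("modular/regional", 20.0,    50),   # containers or office buildings
    Tier("core hyperscale",  60.0, 10000),   # massive centralized facility
]

def placement_for(latency_budget_ms: float) -> str:
    """Pick the most centralized tier that still meets a latency budget."""
    candidates = [t for t in TIERS if t.round_trip_ms <= latency_budget_ms]
    if not candidates:
        raise ValueError("no tier meets this latency budget")
    # Prefer the deepest (largest-capacity) tier that satisfies the budget,
    # since centralized capacity is cheaper to operate per rack.
    return max(candidates, key=lambda t: t.rack_capacity).name
```

For example, a 2 ms budget forces workloads to the near edge, while a 30 ms budget allows them to consolidate in a modular regional facility.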
A better model is to pursue a “homogeneous” computing infrastructure that builds on a common set of standards, such as the models developed by the Open Compute Project (OCP). This is the only way to scale quickly and cost-efficiently. It not only improves the efficiency and predictability of data flow across the architecture, it creates the possibility for seamless management in very remote “hands-free” environments of near-edge and micro layers of architecture.
Fortunately, we now have an industry model that makes available all the components of a homogeneous IT infrastructure for 5G.
Decommissioned hyperscale facilities provide scale
Hyperscale compute innovation, based on OCP principles, stands poised to underpin and deliver against these demands. In the past, its technology has not been widely accessible to broad sets of customers. That has changed with new circular economic models that deliver integrated and recertified hyperscale technology at scale across global markets. Not only does this reduce the time required to deploy 5G infrastructure, it also lays the groundwork for a lower total cost of ownership (TCO) with sustainability built in.
Best-in-class hyperscale servers and components can also have second lives in near-edge, micro and modular data centers. The same hardware that has been proven in the most demanding core data centers can operate in more highly distributed environments, as the backbone of a best-in-class, power- and cost-efficient infrastructure. Likewise, the streamlined, modular, hands-free maintenance and setup support models deployed in hyperscale environments are directly relevant to the more broadly distributed edge environments.
Even better, a circular economy model that liberates a massive stream of hyperscale-proven hardware for edge markets will help achieve maximum density at the edge, quickly and at scale. Service providers who leverage this homogeneous architecture approach will be better positioned to deliver and grow 5G services for their customers across a variety of markets, and do so at a significant economic advantage.
TCO advantages help 5G companies scale faster
This model of bringing hyperscale technology into other data center environments is enabling rack-scale solutions capable of delivering as much as a 50% total cost of ownership advantage over conventional solutions. In edge layers with smaller-capacity compute and storage, we see even greater potential, with a TCO as little as a tenth to a quarter of the cost of traditional CDN infrastructure.
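A back-of-the-envelope sketch shows how such an advantage is computed. Every cost figure below is a hypothetical placeholder chosen to produce the 50% headline number, not actual pricing:

```python
def tco(capex: float, annual_opex: float, years: int = 5) -> float:
    """Total cost of ownership over a service life (no discounting)."""
    return capex + annual_opex * years

# Hypothetical figures: a conventional new-build rack vs. a recertified
# hyperscale rack with lower acquisition cost and better power efficiency.
conventional = tco(capex=100_000, annual_opex=20_000)  # 200,000
recertified  = tco(capex=40_000,  annual_opex=12_000)  # 100,000

advantage = 1 - recertified / conventional  # 0.5, i.e. a 50% TCO advantage
```

The point of the sketch is that the advantage compounds: recertified hardware lowers capex directly, while hyperscale-grade power efficiency lowers opex every year of the service life.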
In fact, we can anticipate increasing TCO advantages across the architecture. Data centers near the edge will use inherently smaller, more confined, less conventional facilities and formats. Being able to deliver the maximum density in the most power- and cooling-efficient manner is going to be critical, which makes the homogeneous extension of OCP from core to edge a foundational piece of the puzzle.
Using a “building-blocks” approach—with small, medium and large IT hardware formats—allows data center architects to look beyond performance metrics to “outcomes per dollar,” a model that's already established by public cloud providers. Carriers building 5G architectures can put these building blocks together in a variety of ways based on their workloads, and add to or evolve those systems as market demands change. This extends their ability to practice integrated and efficient IT management, and do so responsibly under ever more pressing demands to get more from their IT dollars. It also ensures built-in mobility of data between the edge and core, while deploying rapidly across distributed domains.
A building block perspective that focuses on outcomes per dollar is a simpler way to scale up than customizing all the various individual layers that are needed in a 5G infrastructure. The service provider expects the building blocks to work together, wherever they are, and vendors bear the burden of making sure that's true.
This approach also simplifies the strategy for sustainability.
Server reuse creates a baseline for sustainability
As edge infrastructure scales up to and beyond core data centers, it is imperative that we, as professionals in the global technology industry, pay attention to what the scale of 5G infrastructure means for the planet. The sustainability conversation cannot be ignored. Reuse of hyperscale equipment through a circular economic model gives those building 5G edge infrastructure new ways to accelerate the path to their carbon reduction goals, while complementing other data center initiatives, including those from the OCP and investment in sustainable resources.
To start, companies need to commit to renewable energy and greener grids that will reduce the Scope 2 emissions associated with running the hardware. More importantly, they also have to think about sustainability from a materials and manufacturing perspective. Manufacturing IT equipment can account for three quarters or more of the total carbon impact of an IT infrastructure. A circular economy approach is a direct way to defer a tremendous amount of this new manufacturing, and extending hardware lifetime directly reduces Scope 3 emissions from material sourcing while laying the foundation for new “Scope 4” avoided emissions.
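The lifetime-extension argument can be made concrete with a simple amortization model. The embodied and use-phase carbon figures below are illustrative assumptions chosen so that manufacturing dominates, consistent with the estimate above, and are not measured values for any real server.

```python
def annualized_emissions(embodied_kg: float, use_kg_per_year: float,
                         years_in_service: float) -> float:
    """Embodied (manufacturing) carbon amortized over the service life,
    plus the yearly use-phase carbon from powering the hardware."""
    return embodied_kg / years_in_service + use_kg_per_year

# Hypothetical server: 1,500 kg CO2e embodied, 500 kg CO2e/year in use.
first_life = annualized_emissions(1500, 500, years_in_service=3)  # 1000.0
extended   = annualized_emissions(1500, 500, years_in_service=6)  #  750.0
```

Doubling the service life from three to six years cuts the annualized footprint by a quarter in this sketch, because the one-time manufacturing carbon is spread across twice as many years while the use-phase term is unchanged.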
Build the right 5G infrastructure, right now
Multi-layered edge computing is the emerging infrastructure for delivering on the promise of 5G. We have the opportunity to do it right from the start, rather than simply reusing the models we've always followed. Let's get the industry on the right trajectory today, both economically and in its sustainability footprint, as shareholders, consumers and other stakeholders increase the pressure to move to a carbon-zero and carbon-negative economy.