As data center applications become more resource-intensive and fluid, network managers must up their infrastructure game.
The data center environment is constantly changing, which should surprise absolutely nobody. But some changes are more profound than others, and their long-term effects more disruptive.
To be clear, data centers — whether hyperscale, global-scale, multi-tenant or enterprise — aren’t the only ones affected by such fundamental changes. Everyone in the ecosystem must adapt, from designers, integrators and installers to OEM and infrastructure partners.
We are witnessing the next great migration in speed, with the largest operators now transitioning to 400G applications and already planning the jump to 800G. So, what makes this latest leap significant?
For one thing, the move to 400G, then 800G, and eventually 1.6T and 3.2T officially marks the beginning of the octal era, which brings with it some fundamental changes that will affect everyone.
But first, a bit of context.
What’s driving changes in data center infrastructure?
Increases in global data consumption and resource-intensive applications like big data, IoT, AI and machine learning are driving the need for more capacity and reduced latency within the data center.
At the switch level, faster, higher-capacity ASICs make this possible. The challenge for data center managers is how to provision more ports at higher data rates and higher optical lane counts.
Among other things, this requires thoughtful scaling with more flexible deployment options. Of course, all of this is happening in the context of a new reality that is forcing data centers to accomplish more with fewer resources (both physical and fiscal).
While data center network managers are ultimately responsible for ensuring their infrastructure is up to the task, their partners (installers, integrators, system designers and OEMs) all have a substantial amount of skin in the game. The value of the physical layer infrastructure is largely dependent on how easy it is to deploy, reconfigure, manage and scale.
Identifying the criteria for a flexible, future-ready fiber platform
Several years ago, shortly after the launch of CommScope’s high-speed migration platform, we began focusing on the next-generation fiber platform.
So, we asked our customers and partners: “Knowing what you know now — about network design, migration and installation challenges and application requirements — how would you design your next-generation fiber platform?”
Their answers echoed the same themes — easier, more efficient migration to higher speeds, ultra-low-loss optical performance, faster deployment, more flexible design options.
In synthesizing the input, and adding lessons learned from 40+ years of network design experience, we identified several critical design requirements necessary for addressing the changes affecting both our data center customers and their design, installation and integration partners:
- The need for application-based building blocks
- Flexibility in distributing increased switch capacity
- Faster, simpler deployment and change management
Application-based building blocks
As a rule, application support is limited by the maximum number of I/O ports on the front of the switch. For a 1RU switch, capacity is currently limited to 32 QSFP/QSFP-DD/OSFP ports. The key to maximizing port efficiency lies in your ability to make the best use of the switch capacity.
Traditional four-lane quad designs provided steady migration to 50G, 100G and 200G. But at 400G and above, the 12- and 24-fiber configurations used to support quad-based applications become less efficient, leaving significant capacity stranded at the switch port. This is where octal technology comes into play.
Beginning with 400G, eight-lane octal technology and 16-fiber MPO breakouts become the most efficient multi-pair building block for trunk applications.
Moving from quad-based deployments to octal configurations doubles the number of breakouts, enabling network managers to eliminate some switch layers.
Moreover, today’s applications are being designed for 16-fiber cabling. So, supporting 400G and higher applications with 16-fiber technology allows data centers to maximize switch capacity.
This 16f design — including matching transceivers, trunk/array cables and distribution modules — becomes the common building block enabling data centers to progress from 400G to 800G, 1.6T and beyond.
Yet, not every data center is ready to move away from their legacy 12- and 24-fiber deployments. They must also be able to support and manage applications without wasting fibers or losing port counts. Therefore, efficient application-based building blocks for 8f, 12f and 24f configurations are needed, as well.
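The efficiency argument above comes down to simple fiber arithmetic. The sketch below illustrates it in Python; the utilization figures follow from each duplex lane needing one transmit and one receive fiber, though actual trunk designs and fiber counts vary by deployment:

```python
def utilization(lanes: int, fibers_per_connector: int) -> float:
    """Fraction of trunk fibers that carry traffic for a duplex application."""
    fibers_used = lanes * 2  # each optical lane needs one Tx and one Rx fiber
    return fibers_used / fibers_per_connector

# Quad (4-lane) application over a 12-fiber MPO: only 8 of 12 fibers light up,
# leaving 4 fibers stranded per connector.
quad = utilization(4, 12)    # ≈ 0.67
# Octal (8-lane) application over a 16-fiber MPO: all 16 fibers carry traffic.
octal = utilization(8, 16)   # 1.0
```

On a 32-port 1RU switch, that stranded capacity compounds across every port, which is why matching the breakout fiber count to the eight-lane application recovers it.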
Another key requirement is for a more flexible design to enable data center managers and their design partners to quickly redistribute fiber capacity at the patch panel and adapt their networks to support changes in resource allocation.
One way to achieve this is to build modularity into the panel components themselves, enabling alignment between point-of-delivery (POD) and network design architectures.
In a traditional fiber platform design, components such as modules, cassettes and adapter packs are panel-specific. As a result, swapping in components with a different configuration means replacing the panel as well.
The most obvious impact of this limitation is the extra time and cost to deploy both new components and new panels. At the same time, data center customers must also contend with additional product ordering and inventory costs.
In contrast, a design in which all panel components are essentially interchangeable and designed to fit in a single, common panel would enable designers and installers to quickly reconfigure and deploy fiber capacity in the least possible time and with the lowest cost. So too, it would enable data center customers to streamline their infrastructure inventory and its associated costs.
Simplifying and accelerating fiber deployment and management
The final key criterion defined by CommScope’s research and design efforts is the need to simplify and accelerate the routine tasks involved in deploying, upgrading and managing the fiber infrastructure.
While panel and blade designs have offered incremental advances in functionality and design over the years, there is room for significant improvement.
The issue of polarity management also deserves mention. As fiber deployments grow more complex, ensuring that the transmit and receive paths remain aligned throughout the link becomes more difficult.
In the worst case, ensuring correct polarity requires installers to flip modules or cable assemblies. Mistakes may not be discovered until the link has been deployed, and resolving them at that stage adds time and cost.
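To see why polarity is easy to get wrong, consider the standard TIA-568 connectivity methods for a 12-fiber MPO trunk, sketched below as simple position mappings (a simplified illustration; real links also involve connector key orientation and patch cord types):

```python
def method_a(pos: int) -> int:
    """Method A: straight-through trunk, fiber position preserved."""
    return pos

def method_b(pos: int, n: int = 12) -> int:
    """Method B: fiber order reversed end to end (1->12, 2->11, ...)."""
    return n + 1 - pos

def method_c(pos: int) -> int:
    """Method C: adjacent pairs flipped (1<->2, 3<->4, ...)."""
    return pos + 1 if pos % 2 == 1 else pos - 1

# A transmit fiber must land on the matching receive position end to end.
# Note that chaining two Method B trunks restores the original order, which
# is why mixing methods mid-link silently breaks Tx/Rx alignment.
assert method_b(method_b(7)) == 7
```

Every cassette, trunk and patch cord in the channel must follow a consistent scheme; a panel design that builds the correct mapping into interchangeable components removes this bookkeeping from the installer.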
Enter the Propel solution
The result of CommScope’s intelligence-gathering and subsequent design and engineering efforts is the Propel solution, a new end-to-end, high-speed, modular fiber platform.
Rigorously designed around the key criteria of application-based building blocks, design flexibility and deployment speed, the Propel solution is the first global fiber platform to incorporate native 16-fiber technology while supporting 8f, 12f and 24f applications.
As a result, it provides a single platform that supports multiple network generations. It is also optimized with ultra-low-loss optical performance and designed to be a greener, more sustainable solution.
Learn how CommScope can help data center operators maximize their existing infrastructure investments while preparing for future applications with the Propel solution.