With the approval of the IEEE 802.3ba standard in May 2010, the first Ethernet standard to define two different network speeds, the door was opened for standardized implementations of 40 Gigabit Ethernet (40GbE) and 100GbE networking. With applications ranging from top-of-rack switching to backhaul and data center interconnect, the stage seemed set for a significant leap in performance beyond the 10GbE infrastructures that were themselves only recently deployed.

While the demand for performance beyond 10GbE was clear, driven largely by the move towards cloud computing and the need for high-speed connectivity between disparate resources, 40GbE has been adopted rather slowly. Part of the reason is that just as 40GbE began to hit its stride, 25/50GbE appeared to muddy the waters.

[Image: Ethernet fast lane – DCD / Jess Parker]

25/50GbE chips are here

Although formal adoption of the 25/50GbE IEEE standard is not expected until later in 2016, by mid-2015 chips and products supporting the technology had begun to appear from vendors such as Broadcom, with its high-density 25/100 Gigabit Ethernet StrataXGS Tomahawk switch series, and Mellanox, with its 25/100 Gigabit open Ethernet-based switch.

With the performance capabilities of the most recent generations of Intel processors allowing data to be moved faster than 10Gbps, a technology that allows incremental growth in networking performance without additional networking infrastructure looks like a good idea. Simply adding 10GbE ports increases the number of required switches and the associated costs. 25GbE also has the advantage of using a single lane, just like a 10GbE connection, with only a single set of appropriate silicon; 40GbE, by contrast, aggregates four 10GbE lanes and requires significantly more hardware. Early adopters were paying around 1.5 times the cost of 10GbE for 25GbE performance. As production increases and the standard is ratified, the cost is expected to fall to the same as 10GbE, for 2.5 times the performance.
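As a rough illustration of how those multiples translate into price per gigabit, the sketch below works through the figures quoted above. The $100 baseline for a 10GbE port is an arbitrary placeholder, not a quoted price, and the assumption that a 40GbE port costs roughly four times a 10GbE port (one per lane) is an illustrative simplification rather than a figure from the article.

```python
# Back-of-the-envelope cost-per-gigabit comparison using the multiples
# quoted in the text. Only the ratios matter; the $100 baseline is a
# placeholder, and the 4x figure for 40GbE is an assumption based on
# its four-lane design.
BASE_10G_PORT_COST = 100.0

ports = {
    "10GbE":               (10, 1.0 * BASE_10G_PORT_COST),  # reference point
    "25GbE (early)":       (25, 1.5 * BASE_10G_PORT_COST),  # early-adopter premium
    "25GbE (at volume)":   (25, 1.0 * BASE_10G_PORT_COST),  # expected parity with 10GbE
    "40GbE (4x10G lanes)": (40, 4.0 * BASE_10G_PORT_COST),  # assumed to scale with lane count
}

for name, (gbps, cost) in ports.items():
    print(f"{name:<22} {gbps:>3} Gbps  ${cost:>6.0f}  ${cost / gbps:>5.2f}/Gbps")
```

Run this and the cost-per-gigabit column makes the argument plain: even at a 1.5x premium, 25GbE already undercuts 10GbE per gigabit, and at price parity it delivers 2.5 times the performance for the same spend.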

The lane aggregation technique, as applied to 25GbE, simplifies the introduction of faster network speeds: 50GbE is simply two lanes and 100GbE four, matching the specifications developed for the existing 100GbE IEEE standard. The ability to scale the technology in this way should help reduce total deployment costs from end to end, since you are effectively using the same technology from top-of-rack switching, through data center interconnect, to the appropriate cloud resources.
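A minimal sketch of that lane arithmetic, with the 25Gbps-per-lane figure taken from the text:

```python
# Single-lane signalling rate for 25GbE, in Gbps (from the text above).
LANE_RATE_GBPS = 25

# Aggregate speeds follow directly from the lane count.
for lanes in (1, 2, 4):
    print(f"{lanes} lane(s) x {LANE_RATE_GBPS}G = {lanes * LANE_RATE_GBPS}GbE")
# 1 lane(s) x 25G = 25GbE
# 2 lane(s) x 25G = 50GbE
# 4 lane(s) x 25G = 100GbE
```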

What we started to see in 2015 was the release of switches capable of handling both current and proposed standards for high-performance networking. Switch hardware with out-of-the-box support for 10/25/40/50/100GbE networking became available from end-to-end solution vendors including Dell and Cisco, as well as from specialized networking equipment providers such as Juniper and Arista.

25GbE is supported by the 25 Gigabit Ethernet Consortium, which, along with hardware vendors that make the silicon and switches, also includes Google and Microsoft, which make use of the technology in their own data centers. The goal of the group is to promote widespread acceptance of the 25GbE and 50GbE technology.

At present, 100GbE is used primarily for aggregation and interconnection, but with PCI Express 4.0 likely to be released in 2017, offering a maximum transfer rate of 16 gigatransfers per second (GT/s) per lane, it will become possible for individual servers to require dedicated 100GbE connections. While not a guarantee of future-proofing, top-of-rack switches that can support server connections at 100GbE will be a prudent choice.
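A rough back-of-the-envelope check of that claim, assuming the standard PCIe 4.0 parameters of 16 GT/s per lane and 128b/130b encoding (general PCIe figures, not taken from the article):

```python
# Approximate per-direction PCIe 4.0 throughput, to show why a single
# server slot could saturate a 100GbE link. Standard PCIe 4.0 assumptions:
GT_PER_SEC_PER_LANE = 16.0        # raw transfer rate per lane
ENCODING_EFFICIENCY = 128 / 130   # 128b/130b line-encoding overhead

def pcie4_gbps(lanes: int) -> float:
    """Approximate usable bandwidth in Gbps for a PCIe 4.0 link."""
    return lanes * GT_PER_SEC_PER_LANE * ENCODING_EFFICIENCY

for lanes in (8, 16):
    print(f"PCIe 4.0 x{lanes}: ~{pcie4_gbps(lanes):.0f} Gbps per direction")
# PCIe 4.0 x8 : ~126 Gbps
# PCIe 4.0 x16: ~252 Gbps
```

Even a x8 slot comfortably exceeds 100Gbps by this estimate, which is why a 100GbE adapter in a PCIe 4.0 server would no longer be bottlenecked by the bus.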

OCP muscles in

Even the Open Compute Project (OCP) is getting in on the act: the 40GbE Wedge switches that Facebook introduced use Broadcom Trident II silicon, which is capable of supporting all of the previously discussed standards. The original Wedge offered 16x40GbE ports, while the updated Six Pack configuration is a 128x40GbE setup built on 10G channels that can be upgraded to 25GbE to form the basis of a 100GbE interface.

Just as 40GbE began to hit its stride, 25GbE appeared to muddy the waters

Facebook still hasn’t reached its goal of adopting the Wedge top-of-rack switch design throughout its data center enterprise, but it does have thousands of Wedge switches in place. As Facebook develops and deploys the 100GbE versions of the Wedge, it will have the advantage of directly comparable performance data to evaluate the efficiency of higher-performance networking technologies.

With plans to use 400Gbps optical connections, Facebook could get a head start on the next generation of networking performance: the IEEE 802.3bs 400 Gigabit Ethernet technology, which is currently at the task force stage of development and standardization.

How you choose to architect your data center networks depends primarily on the amount of network traffic you expect to see through your next hardware revision cycle. Specific requirements, such as high-density computing, high-performance computing or fast storage networks, will push your design towards faster networking and interconnects. A heavy investment in cloud networking services also argues for faster networking capabilities.

Also consider that fewer, faster connections reduce the power and footprint demands of your data center networks, cutting both capital expenditure and ongoing operational expense. You may find that to be the case even in a spine-and-leaf network architecture. Your decision could also be influenced by the projected fall in the cost of 100GbE switches as adoption increases: based on the deployment and pricing pattern of 40GbE switching, analysts have suggested prices will drop by as much as 50 percent year on year for the next few years.
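Purely as an illustration of that projection, compounding a 50 percent year-on-year decline from an arbitrary starting price looks like this:

```python
# Illustrative only: compounding the analysts' projected 50% annual
# price decline from an arbitrary starting price of $1,000 per port.
price = 1000.0
for year in range(2016, 2020):
    print(f"{year}: ~${price:,.0f} per port")
    price *= 0.5  # projected 50% year-on-year drop
```

On that trajectory the per-port price falls to a quarter of today's figure within two years, which is the kind of curve 40GbE switching followed after standardization.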


You need accurate capacity planning

So, to a significant extent, accurate capacity planning for the future growth of your data centers will be critical in determining if, and when, to adopt higher-performance networking technologies across the board. Nothing in this planning should prevent specific deployments for customers or projects that need a higher-performance solution.

Standalone projects, or high-density computing deployments that require or benefit from early adoption of networking technologies not yet needed by the bulk of your data center operations, can give you an early heads-up on the benefits and issues you will face when you begin deploying higher-performance networking backbones.

As the current crop of switches indicates, future products are likely to be compatible with a broad range of network speeds, meaning that not just switch gear, but also your choices of cable plant and physical interface, will be important when you plan your next-generation deployments.

This article appeared in the February 2016 issue of our magazine