Migration to 100Gbps network speeds in enterprise data centers continues to gain momentum. According to Crehan Research, 100Gbps and 25Gbps Ethernet enjoyed a 40 percent year-over-year rise and accounted for 24 percent of total high-speed NIC revenue in 2020. The Dell'Oro Group predicted that 100Gbps Ethernet will make up over 30 percent of data center switch ports sold over the next five years.
Let’s explore the drivers of this growth, plus the secondary effects this migration has on an organization’s IT operations.
Smaller, cheaper, faster...
Of course, part of the answer is the steady march toward faster, smaller, and cheaper technology, and network infrastructure is no exception. Users want the most accessible and responsive experience from the applications they use, and online applications continue to demand higher data rates to maintain the performance people expect.
In addition, I see four main drivers for the growth of 100Gbps speeds:
- Maturity and decreasing cost
The steady maturity of the technology, its surrounding ecosystem, and the declining cost of the underlying components all make it more attractive. The supply of hardware is available to meet the demand for more speed.
- Increasing network loading
Digital transformation has been underway for decades. Broad adoption of mobile devices and a myriad of online services has increased both the number and duration of connections and the amount of data transferred over them. Then, one year ago, came a catalyst no one predicted: a global pandemic. Covid-19 increased the use of ecommerce and online services, and an unprecedented number of knowledge workers shifted to working remotely. Many companies have already announced a permanent shift to remote work for some or all of their employees. All of that means much more traffic over the network, particularly for collaboration apps like Slack, Zoom, and Jira. This sudden, and perhaps permanent, transformation drives 100Gbps adoption in two ways. The first is simple traffic volume: with more companies and workers depending on digital workloads, traffic has grown dramatically beyond expected trendlines. The second is that much of this traffic (like video) is exceptionally sensitive to latency and jitter. Moving traffic faster raises overall throughput and eliminates bottlenecks.
- Compute-intensive workloads are growing
High-performance computing, as well as interactive applications that are sensitive to throughput, latency, and jitter, such as telemedicine and high-frequency trading, will always demand the highest possible speeds and thus benefits greatly from 100Gbps. Organizations that provide these applications as services have been early adopters.
- AI transformation
This is a specific type of compute-intensive workload with two parts: training and inference. Training machine learning models is very data- and I/O-intensive; the faster models can be trained, the faster they can be put into production to provide their intended benefits. With machine learning under the hood of AI-powered solutions, these processing- and I/O-intensive workloads keep growing. In fact, AI has reached the point where it is being integrated into every major type of enterprise operational technology: marketing, HR, sales, ecommerce, manufacturing, customer retention, IT operations, and more. This increasingly broad adoption impacts networks in ways that come down to throughput and latency. As you might expect, AI and its underlying ML models are increasing the amount of East-West traffic. On the inference side, AI-powered solutions often must deliver results in real time and are built on high-performance architectures to give millisecond responsiveness to a customer interaction. All of this requires transmitting data as fast as possible, which at today's commercially available standard means 100Gbps.
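To make the training side of this concrete, here is a back-of-envelope sketch (with a hypothetical 1 TB dataset size, chosen purely for illustration) of how long it takes just to move training data across the network at common Ethernet speeds:

```python
# Illustrative only: wall-clock time to transfer a hypothetical
# 1 TB training corpus at common Ethernet line rates.
DATASET_BYTES = 1e12  # hypothetical 1 TB dataset (assumption, not from the article)

transfer_seconds = {
    gbps: DATASET_BYTES * 8 / (gbps * 1e9)  # bytes -> bits, divide by line rate
    for gbps in (10, 25, 100)
}

for gbps, seconds in transfer_seconds.items():
    print(f"{gbps:>3} Gbps: {seconds:7.1f} s")
# 10 Gbps -> ~800 s, 100 Gbps -> ~80 s
```

The tenfold reduction in transfer time is exactly why data- and I/O-bound training pipelines are early beneficiaries of 100Gbps links.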
Secondary effects of 100Gbps upgrades
Organizations must update their network security and performance management tools in lockstep as they upgrade their core networks to 100Gbps. In fact, I recommend upgrading your monitoring fabric as part of the early phase of the upgrade, to provide a clear understanding of performance before and after and to make it easier to troubleshoot the inevitable glitches. Finally, when it comes to cybersecurity, you can never let your guard down, so make sure your security posture remains strong throughout the upgrade.
Security solutions like Network Detection and Response become even more critical at higher data rates, because clever criminals attempt to exploit the higher, faster traffic by gently tiptoeing through the network. Similarly, for performance management, bottlenecks carry higher consequences and are harder to troubleshoot. Lossless monitoring, together with the real-time metric observation, processing, and routing it entails, is technically challenging when packets can traverse the monitoring fabric every 6.7 nanoseconds. Carefully evaluate prospective monitoring solutions for their ability to reliably acquire packets and observe key performance indicators at a resolution that corresponds to your services and use cases. For example, if your application cannot tolerate more than 20 milliseconds of latency, then the monitoring solution must observe latency with 10 millisecond resolution or better.
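The 6.7 nanosecond figure can be sanity-checked from standard Ethernet framing. A minimum-size 64-byte frame carries 20 bytes of additional on-wire overhead (preamble, start-of-frame delimiter, and interframe gap), so on a fully loaded 100Gbps link a new packet can arrive roughly every 6.7 ns:

```python
# Back-of-envelope check: inter-arrival time of minimum-size frames
# on a fully loaded 100Gbps Ethernet link.
LINK_RATE_BPS = 100e9        # 100 Gbps line rate
MIN_FRAME_BYTES = 64         # minimum Ethernet frame size
PREAMBLE_BYTES = 8           # preamble + start-of-frame delimiter
INTERFRAME_GAP_BYTES = 12    # mandatory inter-frame gap

bits_on_wire = (MIN_FRAME_BYTES + PREAMBLE_BYTES + INTERFRAME_GAP_BYTES) * 8
seconds_per_packet = bits_on_wire / LINK_RATE_BPS
packets_per_second = 1 / seconds_per_packet

print(f"{seconds_per_packet * 1e9:.2f} ns per packet")  # ~6.72 ns
print(f"{packets_per_second / 1e6:.1f} Mpps")           # ~148.8 Mpps
```

In other words, a monitoring fabric at 100Gbps must be able to keep up with nearly 150 million packets per second per link in the worst case, which is why lossless acquisition is such a demanding requirement.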
Some parts of an organization’s infrastructure may not need to move to higher data rates for a while, and specific devices and tools will remain at 10Gbps, or at other speeds below 100Gbps, because their vendors have not upgraded those products. Complex enterprise networks are also rarely upgraded in their entirety all at once. So you will need a way to bridge between faster and slower data rates, which is something many network packet brokers can do: they can match the ingestion limits of devices and tools and thereby extend their useful life.
The factors driving faster data rates listed above will not abate. Adoption of, and standardization on, 100Gbps is therefore the next step forward for enterprises, and it will remain in place for years to come. As with any generational upgrade, plan carefully, make your monitoring fabric an integral part of that plan, and put it in place early in the data rate upgrade.