So far this year, 5G has become available in market after market. Many providers are aiming to cover 90 percent of their users within the first year of rollout, a shift that will open up entirely new avenues for IoT applications and, with them, far more data.

Meanwhile, video streaming continues its relentless growth: we’re streaming more than ever, and our expectations for resolution (4K, anyone?) keep rising.

For data centers, this means it’s not too early to prepare for 400Gbps networking. Note the use of “prepare”: even if you’re not ready to upgrade to 400Gbps infrastructure right now, knowing what’s coming should inform any upgrades you do make in the coming months and years.

Whatever the current capacity of your data center, here are three areas to consider as you prepare for 400Gbps.


1) Power: ensure adequate energy

Unsurprisingly, as you transition from 40Gbps or 100Gbps to 400Gbps, you’ll need more electricity. The increase won’t be linear (400Gbps doesn’t require four times as much power as 100Gbps, since power per bit keeps falling with each generation), but your total power draw will be higher as you scale up.
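To make the non-linear scaling concrete, here is a back-of-the-envelope sketch. The per-transceiver wattages are assumed, illustrative figures, not vendor data; check your switch and optics datasheets for real numbers.

```python
# Rough optics power budget for one 32-port faceplate, per generation.
# The wattages below are ASSUMPTIONS for illustration only.

PORTS = 32

# name -> (speed in Gbps, assumed power draw in watts per transceiver)
generations = {
    "100Gbps (QSFP28)": (100, 4.5),
    "400Gbps (QSFP-DD)": (400, 12.0),
}

for name, (gbps, watts) in generations.items():
    total_w = PORTS * watts
    # Power per bit improves even though total draw goes up.
    print(f"{name}: {total_w:.0f} W of optics, "
          f"{watts / gbps * 1000:.1f} mW per Gbps")
```

Under these assumed figures, 4x the bandwidth costs well under 4x the power per port, but the absolute draw (and the heat your HVAC must remove) still rises.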

In addition to the energy required to move and process the actual data, you’ll need additional cooling capacity with 400Gbps. The equipment that powers 400Gbps will generate more heat than your current equipment, so your data center’s HVAC system will have to be able to keep up.

As you update your data center, be sure to provision more electrical capacity and cooling than you need today so it’s ready for increased future capacity.

2) Networking: choose the best fit

The important thing to know is that there’s more than one way to get to 400Gbps; the right configuration will depend on your current setup, the size of your data center, your budget, and your current and future capacity needs.

At short reaches of <100m, there are parallel multi-mode fiber (MMF) options for 400Gbps but no copper options are standardized at this time (there is a 200Gbps copper standard for <3m). Check to see what is supported by your preferred switch vendor.

For intermediate-length (<500m) 400Gbps runs, a good option is parallel single-mode fiber (SMF) links. The fiber installation costs will be higher up front, but you’ll save a lot on the transceivers at each end. This type of link is also a good option for intermediate-length 100Gbps runs, and advances in silicon photonics have brought the price of parallel SMF links down considerably in recent years.

For much longer links (say, between buildings or up to your Internet Service Provider), you’ll likely need standard SMF pairs.
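The reach-based choices above can be sketched as a simple selection helper. The cutoffs follow the article’s rough figures; actual reach limits depend on the specific standard and on what your switch vendor supports, so treat this as a starting point, not a rule.

```python
# Sketch of the media-selection logic described above.
# Reach cutoffs are the article's rough figures, not standard limits.

def media_options(length_m: float, speed_gbps: int) -> list[str]:
    """Return candidate link media for a given reach and speed."""
    options = []
    if length_m < 3 and speed_gbps <= 200:
        # Copper tops out at 200Gbps; no 400Gbps copper standard yet.
        options.append("twin-ax copper")
    if length_m < 100:
        options.append("parallel multi-mode fiber (MMF)")
    if length_m < 500:
        options.append("parallel single-mode fiber (SMF)")
    # Long runs (between buildings, to your ISP) fall through to this.
    options.append("duplex single-mode fiber (SMF) pairs")
    return options

print(media_options(50, 400))    # short 400Gbps run: MMF, parallel SMF, duplex SMF
print(media_options(2000, 400))  # long run: duplex SMF pairs only
```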

Above 25Gbps, you’ll move from the SFP+ transceiver form factor to the QSFP (quad SFP), which accommodates four or more fibers in each direction instead of one. The QSFP handles capacities of 40Gbps and 100Gbps. Beyond that, as you prepare for 200Gbps and 400Gbps, you’ll need newer form factors like OSFP and QSFP-DD (though they’re similar to the QSFP). Keep in mind that the SFP-DD effort is also underway, aiming to fit 100Gbps into an SFP+-sized transceiver.

One consideration with the larger transceivers is that, because they’re bigger, you can’t fit as many on a faceplate: 32 ports instead of 48. This matters if you’re highly space-constrained, e.g., if you’re in a colo facility or otherwise paying per rack or by square footage.
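Even so, the faceplate math favors the larger form factor. A quick sketch, assuming a rack unit of 48 SFP-sized ports at 25Gbps versus 32 QSFP-DD ports at 400Gbps (port counts from the text, speeds typical for each form factor):

```python
# Faceplate math: fewer ports does not mean less bandwidth.

sfp_ports, sfp_gbps = 48, 25        # assumed: 48 SFP-sized cages at 25Gbps
qsfpdd_ports, qsfpdd_gbps = 32, 400 # assumed: 32 QSFP-DD cages at 400Gbps

sfp_total = sfp_ports * sfp_gbps            # aggregate Gbps per rack unit
qsfpdd_total = qsfpdd_ports * qsfpdd_gbps   # aggregate Gbps per rack unit

print(f"48x 25Gbps:   {sfp_total:,} Gbps per rack unit")
print(f"32x QSFP-DD:  {qsfpdd_total:,} Gbps per rack unit")
```

Two-thirds the port count, but roughly ten times the aggregate bandwidth per rack unit under these assumptions.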

This isn’t to say you should avoid the QSFP form factor - to enable higher speeds and increased bandwidth, you will need it. The key is to understand how you can cost-effectively outfit your data center today to accommodate current needs while also making decisions that will let you seamlessly transition to equipment that will meet your future needs.

3) People: train your team

Make no mistake: as you upgrade your equipment to prepare your data center for 400Gbps, you will also need to train your team to handle that equipment. Cat 5 twisted pair copper cables, which are typically used for lower-speed connections, are much more resistant to bending and crimping than the twin-ax copper and fiber-optic cables needed in a 100Gbps or 400Gbps data center.

For example, while it’s common to use zip ties to bundle twisted pair copper cables, ties cinched that tightly can damage twin-ax copper and fiber-optic cables (which is why many data center managers now use Velcro).

Similarly, data center technicians used to be able to polish the ends of single-strand MMF if they got scratched, but with parallel SMF, polishing can cause serious damage. SMF cores are so small that a single smoke particle can block the light.

This means, of course, that you can’t just upgrade your data center and expect your staff to handle it perfectly. You’ll have to train your technicians so they know how to handle and manage the new cables. You may have to implement some contamination mitigation (gloves, face masks, etc.) and employ new fiber cleaners and inspection protocols to prevent the kinds of barely visible problems that can seriously slow or harm your equipment’s performance.

The future is faster

Even if 400Gbps isn’t in your data center’s immediate future, it’s important to be aware of how quickly demand for data is increasing and to make changes today that lay the groundwork for an even more data-hungry future.

Emerging technologies like 5G have the capacity to change our lives as much as the smartphone did; as innovators develop systems that rely on new technology, our data needs will only increase. Preparing now to keep increasing your data center’s capacity will position you to meet demand in the years to come.