When it comes to data center cabling, fiber optic may have the edge in speed and length, but copper wiring and interconnects are still very much alive and kicking. What’s more, they are likely to remain firmly embedded in some corners of data center infrastructure for many years to come.
“People may be trying to kill copper, but it is not dying off,” says Shain Walsh, senior vice president of corporate marketing and development at Emulex, which manufactures a range of Ethernet adapters and Fibre Channel converged network adapters under its own brand and those of Cisco, EMC, HP, HDS, Huawei and others.
Copper is proven
“The last data point we have is from a host connectivity survey from Crehan Research, which showed that just over 20 percent of the host 1Gbps ports are still copper. The cost and the proven nature of Category 6 (CAT6) cabling mean it continues to be in demand in data centers, and the cost delta between dealing with small form-factor pluggable (SFP) [fiber transceivers] and optical cables is what’s driving that. The other pragmatic part is that most people do not need the full length of optical,” says Walsh.
Research from Dell’Oro Group predicts that 80 percent of server connections will be 10GbE-based by 2018, with most intra-rack connections linking servers to storage, and leaf switches using some form of copper cabling over the short distances required.
That cabling is likely to be either twinax 10GBase-CR in the direct attached cable (DAC) format, which comprises a fixed-length cable with SFP+ plugs integrated into both ends, or 10GBase-T over twisted pair CAT6 or CAT6A copper cabling, which pushes data at 10Gbps over runs of up to 100m and is backwards-compatible with 1GbE cabling, meaning no wiring upgrades are required.
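As a rough illustration of the tradeoff described above, here is a hypothetical helper that picks a 10GbE medium from link length alone. The thresholds are assumptions for the sketch: passive twinax DAC is typically limited to short in-rack runs (around 7m), 10GBase-T over CAT6A reaches 100m, and anything longer needs fiber. Real selections would also weigh cost, power and latency.

```python
def pick_10g_medium(length_m: float) -> str:
    """Illustrative only: choose a 10GbE cabling medium by link length.

    Cutoffs are assumptions for this sketch, not a standards requirement:
    ~7 m for passive twinax DAC, 100 m for 10GBase-T over CAT6A.
    """
    if length_m <= 7:
        return "10GBase-CR twinax DAC (SFP+)"  # typical in-rack choice
    if length_m <= 100:
        return "10GBase-T over CAT6A"          # reuses structured copper
    return "fiber (SFP+ optics)"               # beyond copper's reach

print(pick_10g_medium(3))    # a short in-rack hop
print(pick_10g_medium(40))   # a row-scale run
print(pick_10g_medium(250))  # an inter-row or backbone link
```

In practice a server-to-top-of-rack link falls in the first branch, which is why DAC and 10GBase-T dominate inside the rack while fiber handles the longer switch-to-switch runs.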
And with network interface cards and 10GBase-T LAN-on-Motherboard (LOM) equipped servers widely available, Dell’Oro Group believes the next few years will see increasing numbers of enterprise-scale data centers in particular accelerating their 10GBase-T deployments.
The research company predicts that 10GBase-T port shipments will double those of SFP+ DAC by 2017, driven primarily by the refresh of the vast installed base of legacy 1Gbps UTP links between servers and top-of-rack switches.
Not all data centers are the same, and precise upgrade paths are difficult to predict in every case. Some place much higher bandwidth, latency and throughput demands on their cabling infrastructure than others, depending on their business. Many facilities deploy different environments for blade servers and box servers, and wiring upgrades are often tackled one step, row, or rack at a time rather than undertaken as complete redesigns, which usually results in a broad mix of different technologies being deployed concurrently.
We see copper as one of the main interconnects used inside the rack at 100GbE
Arlon Martin, Mellanox
There is certainly a sense that the broader data center market is becoming segmented, as Web 2.0 companies such as Google, Facebook and Amazon, along with cloud service providers and telcos running hyperscale facilities, force a faster migration to 25GbE and 40GbE server interfaces in facilities that already rely on 10GbE fiber optic cabling and interconnects between top-of-rack and aggregation switches, for example.
Leading-edge data centers may have to move to fiber in order to get the benefits of newer technologies. “The early adopters will go fiber first because those standards tend to get written first, but there is a difference in the way people use their data centers,” says Carrie Higbie, global director of data center solutions and services at network cabling specialist the Siemon Company. “If you take an ISP or Google, or Facebook, their revenue is their data center, so their cost base for that business is significantly different from that for a bank, for example.”
Different architectural approaches may affect the choice of cables. For instance, some approaches replace top-of-rack switches with one switch at the end of each row to cut the power wasted in underutilized switch equipment. These configurations may swing the choice between UTP, DAC or fiber optic cables, particularly in small data centers for which centralized switching represents the least expensive option, and where server virtualization is not as widely used.
Cost is a factor
Clearly, cost is a big factor in any data center cabling upgrade decision, but other factors – including availability, standardization, power consumption, distance and cabling media characteristics – also play their part. The IEEE’s next-generation 40GBase-T standard is currently being defined to run at 40Gbps on some form of Category 8 structured copper cabling system, with greater shielding to prevent crosstalk and boost signal distance. But it could be another five to seven years before suitable products reach acceptable price points and energy consumption levels.
“I think [40GBase-T] is a year-and-a-half out or so, and with the first round equipment power will be a concern, so it will take a good three to five years before the stuff becomes commercially available and affordable,” says Higbie. “You will have the bleeding-edge users that will drop in, but some of those guys are using fiber 40GbE today. They will be the first to push the envelope, but that is not the typical, average company’s data center by any stretch.”
Elsewhere, many vendors have developed 25GbE and 50GbE SFP+ DAC in-rack interconnects, and top-of-rack switches able to aggregate multiple lanes to offer 100GbE and 200GbE speeds using four lanes of fiber or copper pairs in the future.
“We see copper as maybe not the principal but one of the main interconnects used inside the rack at 100GbE,” says Mellanox senior director for marketing, Arlon Martin, who believes an 8m length could be enough to satisfy 98 percent of in-rack cabling requirements. “In the large data center space, it will be the predominant form of in-rack 100GbE, the reason being that most servers will move from 10GbE to 25GbE or 50GbE ports using the QSFP form factor.”
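The arithmetic behind the 100GbE and 200GbE figures above is simply lane aggregation: four lanes of 25GbE or 50GbE are bonded into a single logical link. A minimal sketch of that calculation:

```python
# Aggregate Ethernet rates are built from multiple serdes lanes, as the
# text describes: four 25GbE lanes give 100GbE, four 50GbE lanes 200GbE.
LANES = 4

for lane_gbps in (25, 50):
    aggregate = LANES * lane_gbps
    print(f"{LANES} x {lane_gbps}GbE lanes -> {aggregate}GbE")
```

The same four-lane pattern applies whether the lanes run over copper pairs in a DAC assembly or parallel fiber strands behind a QSFP connector.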
But in smaller hosting facilities, the cost and management advantages of UTP will keep 1000Base-T and 10GBase-T wiring in demand for some time to come. “Most people did not think copper would be a strong player in 10 Gigabit, and it proved to be,” says Walsh.
“I don’t think copper is dead,” says Higbie. “If 10GBase-T is going to get into most data centers, even those that are highly virtualized, it is going to stay there for a good while.”
| Speed (Gbps) | Distance (m) | Name | Standard/Year | Cable required |
|---|---|---|---|---|
| 1 | 100 | 1000Base-T | 802.3ab, 1999 | At least Cat 5 |
| 10 | 100 | 10GBase-T | 802.3an, 2006 | Cat 6a |
| 40 | 30 | 40GBase-T | 802.3bq (draft) | Proposed Cat 8 shielded |
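For readers who want the table above in machine-readable form, here is one way to encode it as a lookup keyed by speed; the values simply transcribe the table (with 40GBase-T carrying the IEEE 802.3bq draft designation), and the `cable_for` helper is illustrative, not part of any standard tooling.

```python
# The Base-T standards table, keyed by speed in Gbps.
BASE_T = {
    1:  {"name": "1000Base-T", "standard": "802.3ab, 1999",
         "distance_m": 100, "cable": "At least Cat 5"},
    10: {"name": "10GBase-T", "standard": "802.3an, 2006",
         "distance_m": 100, "cable": "Cat 6a"},
    40: {"name": "40GBase-T", "standard": "802.3bq (draft)",
         "distance_m": 30, "cable": "Proposed Cat 8 shielded"},
}

def cable_for(speed_gbps: int) -> str:
    """Return the minimum cable grade for a given Base-T speed."""
    return BASE_T[speed_gbps]["cable"]

print(cable_for(10))  # the grade needed for a 10GBase-T run
```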