The disaggregated-rack architecture is more than a side project at Intel Corp. Called “rack-scale” in company lingo, it is part of Intel’s vision of the future of IT.
The reference architecture Intel is working toward is the ideal software-defined data center: a rack where no particular CPU, NIC, memory card, power supply or fan serves one specific server. Using software, the operator provisions just the number of CPU cores and just the amount of memory and bandwidth a particular application needs.
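To make that provisioning idea concrete, here is a minimal sketch in Python of what carving a logical server out of pooled rack resources might look like. The RackPool class, its compose_node method and all the capacity figures are hypothetical illustrations, not an Intel API.

    from dataclasses import dataclass

    @dataclass
    class NodeSpec:
        cpu_cores: int       # cores drawn from the rack's pooled CPUs
        memory_gb: int       # memory drawn from the pooled memory modules
        bandwidth_gbps: int  # share of the rack's fabric uplinks

    class RackPool:
        """Hypothetical model of a rack's shared resource pools."""

        def __init__(self, cores: int, memory_gb: int, bandwidth_gbps: int):
            self.free = NodeSpec(cores, memory_gb, bandwidth_gbps)

        def compose_node(self, spec: NodeSpec) -> NodeSpec:
            # Carve a logical server out of the shared pools, failing
            # loudly if the rack cannot cover the request.
            if (spec.cpu_cores > self.free.cpu_cores
                    or spec.memory_gb > self.free.memory_gb
                    or spec.bandwidth_gbps > self.free.bandwidth_gbps):
                raise RuntimeError("rack pool exhausted")
            self.free.cpu_cores -= spec.cpu_cores
            self.free.memory_gb -= spec.memory_gb
            self.free.bandwidth_gbps -= spec.bandwidth_gbps
            return spec

    # Provision just what one application needs, no more:
    pool = RackPool(cores=512, memory_gb=8192, bandwidth_gbps=320)
    node = pool.compose_node(NodeSpec(cpu_cores=8, memory_gb=32, bandwidth_gbps=5))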
At a company event in San Francisco in July, Eric Dahlen, an Intel architect deeply involved in the engineering effort, took a group of IT analysts and journalists on a deep dive into the future as Intel sees it.
All hands on deck for fabric
From Intel’s perspective, the technological problems of shared power, shared cooling and common management resources among server nodes in a rack have been solved, Dahlen says. “The efficiencies are pretty good now in terms of rack-level power delivery and rack-level cooling.”
Most of Intel’s brain capital dedicated to rack-scale architecture today is focused on the fabric. That is because disaggregating the elements of the server and interconnecting them across the rack threatens to turn the interconnect into a big bottleneck.
The technology promises a number of benefits to the data center operator, all of them core to any data center’s mission: massive improvements in power utilization, a 50% increase in node density per rack and more bandwidth from CPU to the network.
Better node density and power utilization result from shared central power and cooling resources. Increased bandwidth comes from sharing as well: instead of having a NIC (Network Interface Card) in every node, each connected to the network by its own cable, Intel’s proposition is to connect the node directly to an integrated NIC and a switching infrastructure in the rack.
Dahlen showed a sample design where multiple SoCs in a chassis are connected to a “switch mezzanine” over PCIe. PCIe is already built into the nodes, so it comes at no additional expense. The only added cost, in this case, is the cable that connects the chassis to the rest of the rack and the rest of the data center.
This way, the data center operator does not have to spend a lot upfront to provision enough bandwidth for the worst-case scenario. They can decide how much interconnect they need out of the tray, once they have run some workloads and know how much network traffic their particular application generates.
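As a rough illustration of that sizing decision (the figures below are hypothetical examples, not Intel specifications), the measured traffic of a workload tells the operator how many optical uplinks a tray actually needs:

    # Back-of-the-envelope uplink sizing for one tray, after measuring traffic.
    # All figures are hypothetical examples, not Intel specifications.
    import math

    nodes_per_tray = 30          # SoC nodes sharing the tray's switch mezzanine
    peak_gbps_per_node = 1.2     # measured peak traffic per node for this workload
    uplink_gbps = 40             # capacity of one optical uplink out of the tray

    aggregate_gbps = nodes_per_tray * peak_gbps_per_node
    uplinks_needed = math.ceil(aggregate_gbps / uplink_gbps)
    print(f"{aggregate_gbps:.0f} Gbps aggregate -> {uplinks_needed} uplink(s)")
    # 36 Gbps aggregate -> 1 uplink(s); a worst-case design with one 10GbE
    # port per node would have provisioned 300 Gbps of NICs and cabling instead.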
With a traditional architecture, if you want 10GbE, you have to put in a 10GbE NIC and run a 10GbE network cable to every server. And those 10GbE ports at the top of the rack cost a lot of money, Dahlen says.
Photonics must become a lot cheaper
Intel is betting on silicon photonics becoming cost-effective to make this vision a reality. Today, photonics is a very expensive technology, but Intel is working with cable manufacturers to bring that cost down.
What makes photonics expensive is the complicated technology needed to make sure light travels down a precise path, both within cards and between the fibers of mated optical cables. Getting photonics into silicon takes care of the former. To address the latter, Intel is working on something called the “telescoping optical connector.” When two optical cables are mated, even a slight misalignment between fibers causes a loss of optical intensity. Intel’s connector “telescopes” a 40-micron optical footprint at the mating point up to an effective 170-micron footprint. This way, most of the optical intensity gets from one fiber to the other even with a slight misalignment, without the use of expensive alignment technology.
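A simple Gaussian-beam coupling model, a standard textbook approximation rather than Intel’s published analysis, shows why the bigger footprint helps: for two identical Gaussian modes of radius w offset laterally by d, the coupled fraction of light is roughly exp(-(d/w)^2), so the same physical offset is a much smaller fraction of a 170-micron footprint than of a 40-micron one.

    # Why a bigger optical footprint tolerates misalignment: a simple
    # Gaussian-beam model (an illustration, not Intel's published analysis).
    # Coupling efficiency between two identical Gaussian modes of radius w
    # with lateral offset d is approximately exp(-(d/w)**2).
    import math

    def coupling_loss_db(offset_um: float, mode_radius_um: float) -> float:
        efficiency = math.exp(-(offset_um / mode_radius_um) ** 2)
        return -10 * math.log10(efficiency)

    offset = 10.0  # hypothetical lateral misalignment at the mating point, in microns
    for diameter in (40, 170):  # the two footprints quoted above
        loss = coupling_loss_db(offset, diameter / 2)
        print(f"{diameter} um footprint: {loss:.2f} dB loss at {offset} um offset")
    # 40 um footprint: 1.09 dB loss; 170 um footprint: 0.06 dB loss. The same
    # offset that noticeably dims the small footprint barely touches the big one.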
What’s in it for Intel?
Intel has not exactly been known as a rack company, so what’s in it for them? Theoretically, it will be easier to sell both Xeon CPUs and SoC (System on Chip) cards, Dahlen explains.
With a rack architecture like the one Intel is envisioning, where the rack is basically turned into a massive chassis, a customer can try some SoC-based microservers and easily swap them out for Xeon-based servers, he says. They will be able to scale from Atom SoCs up to the most powerful Xeons and back within the same rack.
A version of this article first ran in the latest edition of DatacenterDynamics FOCUS magazine.