It is a little-known fact that the specification used as the standard across the majority of data center racks on the market was created in the 1950s for mounting railroad signaling relays. And you might never have heard much about it had Facebook not reached its current scale.

When you are deploying servers at Facebook’s rate, small imperfections that might not cause much fuss at a smaller scale can become major issues. Frank Frankovsky, director of hardware design and supply chain at Facebook, says he realized about six months ago that this was the case with the racks the company was deploying at its data centers.

Some equipment did not fit into the racks properly. Some chassis, for example, would protrude into the hot aisle. Varying rack heights made it difficult to have complete hot-aisle containment. There were physical issues with cabling and other problems.

Besides a few specs from the aforementioned 60-year-old standard, there has not been any real standardization in data center racks, Frankovsky says. “When you deploy at [Facebook’s] scale, it just becomes really problematic for a data center technician.”

Open Compute takes on racks

The problem had to be addressed, so Frankovsky and three other engineers at Facebook decided to design a new rack from the ground up, optimizing it for massive-scale data center deployments. They have open-sourced the design effort and development of a new rack standard, Open Rack, through Facebook’s Open Compute Project, which applies the open-source-software model to IT hardware and data center design.

What they have come up with so far is a rack that is slightly taller than usual, has a wider bay for IT equipment, and uses a cable-less power-distribution system that is a radical departure from the norm.

The project is still in the development phase and Facebook has not deployed any racks built to Open Rack design at its data centers. What they have built is a prototype they call R2D2. It is a short 12U version of the Open Rack. “We’re using that to send out to all the suppliers who are designing Open Rack-compatible systems,” Frankovsky says.

HP and Dell have both publicly announced server and storage-array designs compatible with Open Rack, and Rackspace, one of the world’s largest hosting-services providers, has also decided to standardize the racks in its data centers on the open-source design, Frankovsky says.

APIs for hardware

Essentially, what they have tried to create is a standardized interface between IT equipment and the rack, as well as an interface between the rack and the data center. Frankovsky likes to refer to these interfaces as “hardware APIs”, again borrowing a concept from the software-development world. An API, or application programming interface, is a defined way for different pieces of software to talk to each other, making them easier to integrate.

In Frankovsky’s mind, a technician installing a piece of hardware in a data center should not have to spend a lot of time doing it. “You should be able to walk up to that rack, plug it in and walk away,” he says. This is one of the key goals of Open Rack.

This does not mean the dimensions of IT equipment have to change. The Open Rack’s equipment bay has been widened from 19 inches to 21 inches, but it can still accommodate standard 19-inch equipment. The wider 21-inch bay, however, enables some “interesting configurations”, like installing three motherboards or five 3.5-inch disk drives side by side in one chassis. The outer width of the rack remains a standard 24 inches to accommodate standard floor tiles.
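To see why the extra two inches matter, here is a minimal sketch of the width arithmetic. The 19-inch and 21-inch bay widths come from the paragraph above; the 4-inch width of a 3.5-inch drive is the standard form-factor figure, while the roughly 7-inch per-board width is an assumption inferred from the three-across layout, not a published Open Rack number.

```python
# Width check for side-by-side layouts in a 19-inch vs 21-inch equipment bay.
# Assumptions: a 3.5-inch drive is 4.0 in (101.6 mm) wide per the standard
# form factor; the ~7.0 in per-board width is inferred, not an Open Rack spec.

DRIVE_WIDTH_IN = 4.0
BOARD_WIDTH_IN = 7.0   # assumed

def fits(bay_width_in: float, unit_width_in: float, count: int) -> bool:
    """True if `count` units of the given width fit across the bay."""
    return count * unit_width_in <= bay_width_in

for bay in (19.0, 21.0):
    print(f"{bay:.0f}-inch bay: five drives fit: {fits(bay, DRIVE_WIDTH_IN, 5)}, "
          f"three boards fit: {fits(bay, BOARD_WIDTH_IN, 3)}")
# 19-inch bay: five drives fit: False, three boards fit: False
# 21-inch bay: five drives fit: True, three boards fit: True
```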

Cable-less power distribution

The Open Rack’s power-distribution system could not be more different from the norm. The rack is split into multiple “power zones”. The capacity of each zone can differ between users, but Facebook’s spec calls for three power zones per rack at 4.2kW each. Each zone is powered by a 3U power shelf containing six server power supplies. Instead of having their own power supplies, servers plug into bus bars at the back of the rack, which are fed by the power shelves.
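The arithmetic behind those numbers is straightforward; the sketch below works it through, assuming the six supplies in a shelf share the zone’s load evenly and ignoring any redundant capacity, which the spec quoted here does not address.

```python
# Back-of-the-envelope figures for the power-zone spec described above.
# Assumes all six supplies in a shelf share the zone load evenly, with no
# redundancy margin; treat the per-supply number as illustrative only.

ZONES_PER_RACK = 3
ZONE_CAPACITY_KW = 4.2
SUPPLIES_PER_SHELF = 6

rack_capacity_kw = ZONES_PER_RACK * ZONE_CAPACITY_KW              # 12.6 kW per rack
per_supply_load_w = ZONE_CAPACITY_KW * 1000 / SUPPLIES_PER_SHELF  # 700 W per supply

print(f"Rack capacity: {rack_capacity_kw:.1f} kW")
print(f"Per-supply share at full zone load: {per_supply_load_w:.0f} W")
```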

Divorcing the power supply from the server is just the beginning. Frankovsky and his team ultimately want to be able to change out individual components of each server. Typically, entire racks get decommissioned along with their servers when it is time for a hardware refresh, he says. But incremental performance improvements are usually driven by new CPUs, while components like Ethernet cards and memory can perform just fine across multiple CPU generations.

Frankovsky eventually wants to be able to upgrade just the components that need to be upgraded, leaving the rest in place. Open Rack is going to play a major part in this. The top-of-rack network switch, for example, will turn into more of an “I/O appliance” that incorporates things such as NICs and boot drives, continuing to strip down the servers.

“We’re currently in early-brainstorming mode on stuff like that, but I think the technologies are going to be within our reach within the next 24 months,” he says.

Giovanni Coglitore, one of the key engineers at Facebook working on Open Rack, says one of the benefits expected from the project is lower cost of ownership for each compute component.

“Each component gets replaced according to its own life cycle, which can be up to 10 years in some cases,” Coglitore says in a blog post on Opencompute.org.

“This disaggregation of compute components (CPU, hard drives, NICs) improves efficiency and reduces the amount of industrial waste.”

Frankovsky announced the Open Rack project at the Open Compute summit at Rackspace headquarters in San Antonio, Texas, in May. Also at the summit, he announced that Facebook engineers had been collaborating with Baidu, China’s answer to Google, and Tencent, another major Chinese Internet-services company and a Baidu partner. 

Today, Coglitore holds Open Rack project meetings every other week, and, according to Frankovsky, Open Rack has become Open Compute’s most actively worked-on project, with hardware management the second most active.

This article first appeared in FOCUS magazine.