Servers in the data center get refreshed on a regular basis, as do network switches and power distribution systems, but one thing stays constant: the physical infrastructure that houses it all, the rack.

Or does it?

In the last few years many of the largest names on the Internet have started to look at whether the humble rack needs updating for the hyperscale era.

The 19in rack has been with us for some considerable time, with some sources indicating it was originally created to house relay circuits for the rail industry before being adopted by telecoms firms. Later, it was co-opted by the computer industry as a handy ready-made infrastructure solution for housing equipment in the server room or data center.

But with the rise of large Internet companies and their sprawling data centers, there has been a perceived need to adapt rack design for large-scale deployments. In particular, there has been a desire to cram in more compute capacity, and to cut down costs through greater efficiency.

“In terms of the rack, if you look at the 19in standard, which was the only one until the last five years, typically, weight loadings have got higher and racks have got bigger,” said Andy Gill, engineering director at Rittal, a firm specializing in IT infrastructure.

Winds of change


The first concerted effort at change came from the Open Compute Project (OCP), a consortium of various big names in the industry that was founded by Facebook as a way to jointly develop technology optimized for the data center.

OCP’s Open Rack standard specifies a wider IT equipment space of 21in while maintaining the same 24in column width as a 19in rack, which is driven by standard floor tile pitch. This design allows for three half-width server motherboards to fit side by side, or for a chassis with five 3.5in drives arranged side by side instead of four. It also specifies a slightly bigger rack unit height of 48mm, called an OpenU or OU, which allows for increased airflow for cooling.
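To put those unit heights in perspective, here is a back-of-the-envelope sketch of how many slots each yields in the same column. The 2,000mm of usable interior height is our assumption for illustration, not part of either specification; the 44.45mm figure is the standard EIA-310 rack unit:

```python
# Slot counts for the same usable column height: a standard rack unit (U)
# is 44.45 mm (1.75 in), while OCP's OpenU (OU) is 48 mm.
STANDARD_U_MM = 44.45    # EIA-310 rack unit
OPEN_U_MM = 48.0         # OCP OpenU, per the Open Rack spec

usable_height_mm = 2000  # hypothetical usable interior height

std_slots = int(usable_height_mm // STANDARD_U_MM)
open_slots = int(usable_height_mm // OPEN_U_MM)

print(f"19in rack: {std_slots} x 1U slots")    # -> 44
print(f"Open Rack: {open_slots} x 1OU slots")  # -> 41
# Fewer slots overall, but each OU gives ~3.5 mm of extra height
# per unit for airflow.
```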

A more significant feature of the Open Rack design is a power supply busbar that extends the full height of the rack. This distributes a 12V feed from a dedicated power shelf to every node in the rack, eliminating the need for each individual server to have its own internal power supply. This not only cuts costs, but does away with the power distribution unit and the cluster of power cables taking power to each individual node - and it allows kit to be easily slid in from the front of the rack.
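The engineering challenge with a low-voltage busbar is the sheer current involved. A rough sketch of why, using nothing more than I = P / V and a hypothetical rack load (the 15kW figure is our assumption, not from the spec):

```python
# Current scales as I = P / V, so a densely packed rack pulls very
# large currents through a 12 V busbar.
rack_power_w = 15_000  # hypothetical total rack load
busbar_voltage = 12.0  # Open Rack busbar feed

current_a = rack_power_w / busbar_voltage
print(f"{current_a:.0f} A through the busbar")  # -> 1250 A
# Carrying that much current cleanly is why the feed is a rigid busbar
# running the full height of the rack rather than ordinary cabling.
```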

In fact, as DCD has previously noted, the Open Rack design is rather like a blade server architecture scaled up to rack level, but with no proprietary lock-in.

According to Gill, Open Rack hardware is largely being taken up by the big hyperscale companies like Facebook and Google, the latter of which joined the OCP about 18 months ago, although some enterprises and finance companies in the US are adopting it as well.

The picture is complicated, however. Some OCP projects, such as Microsoft’s Project Olympus server, are designed to fit standard 19in racks. This is because Microsoft needs kit based on these specifications to be able to fit into its existing rack infrastructure.

In fact, until there is significant demand for a fully engineered Open Rack version of any given piece of kit, the trend has been for a lot of equipment to continue in the older 19in form factor, with vendors bolting it to a 21in sled.

“The bottom line is that the standard 19in rack is still the dominant technology, but as you go forward, you could easily see in two to three years’ time that Open Rack will start to take market share. I would estimate somewhere between 15 and 25 percent of the hyperscale market could be using an Open Compute platform of some description by 2021,” said Gill.

With Google now in the OCP, the newer Open Rack 2.0 specifications have also gained a 48V busbar option for power distribution, which Google claims is 30 percent more energy efficient than 12V equipment. This may mean that Open Rack will find favor with telecoms companies, as much of their equipment already runs at this voltage.
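The physics behind the headline number is straightforward: for the same power, quadrupling the voltage cuts the current to a quarter, and resistive losses in a given conductor scale with the square of the current. A sketch of that scaling (the load and busbar resistance below are illustrative assumptions; Google's 30 percent figure is an end-to-end claim that also reflects conversion stages):

```python
# For fixed power, I = P / V, and resistive loss is P_loss = I^2 * R,
# so 4x the voltage means 1/4 the current and 1/16 the conduction loss.
power_w = 15_000           # hypothetical rack load
busbar_resistance = 0.001  # ohms, hypothetical

for voltage in (12.0, 48.0):
    current = power_w / voltage
    loss_w = current ** 2 * busbar_resistance
    print(f"{voltage:.0f} V: {current:.1f} A, {loss_w:.1f} W lost")
# -> 12 V: 1250.0 A, 1562.5 W lost
# -> 48 V: 312.5 A, 97.7 W lost (1/16 of the 12 V loss)
```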

Price matters


Meanwhile, a similar rack initiative has been started by LinkedIn, which formed the Open19 Foundation to oversee the development of its specifications. The goals of Open19 are to cut the cost of the physical infrastructure, as well as the time needed to install the actual IT hardware into it.

“We wanted to bring in a situation where we are reducing the cost of the racks. Racks in general are really expensive: look at smart PDUs - they cost thousands of dollars,” said Yuval Bachar, Principal Engineer of Architecture and Strategy at LinkedIn, speaking at an event earlier in 2017.

In contrast to the OCP, Open19 takes as a starting point the need to fit into existing 19in rack infrastructure, and specifies a rack-mount enclosure for this purpose. Dubbed a Brick Cage, this is actually little more than a metal frame into which server, network and power modules simply slot.

These modules - or Bricks - also conform to 19in rack norms. Thus a standard Brick is 1U high and half the width of the rack, so that two can fit side by side, and customers can fill the Brick Cage with any combination of standard, double-width or double-height Bricks.
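One way to picture the arrangement is as a tiling problem: the cage is a grid of half-width, 1U slots, and each Brick form factor claims a rectangle of them. A minimal sketch, with a hypothetical 8U cage and an example layout of our own devising:

```python
# A tiny model of how Bricks tile a Brick Cage. Each U of cage height
# provides two half-width columns; a Brick occupies a rectangle of slots.
BRICKS = {             # (height in U, width in half-rack columns)
    "standard":      (1, 1),
    "double-width":  (1, 2),
    "double-height": (2, 1),
}

def fits(cage_u, placements):
    """Check that no two Bricks overlap and all stay inside the cage.

    placements: list of (form_factor, row, col), with row in U from the
    bottom of the cage and col in {0, 1} for the left/right half.
    """
    used = set()
    for name, row, col in placements:
        h, w = BRICKS[name]
        for r in range(row, row + h):
            for c in range(col, col + w):
                if r >= cage_u or c >= 2 or (r, c) in used:
                    return False
                used.add((r, c))
    return True

# Two standard Bricks side by side, a double-width Brick above them,
# then a double-height Brick in the left column:
layout = [
    ("standard", 0, 0), ("standard", 0, 1),
    ("double-width", 1, 0),
    ("double-height", 2, 0),
]
print(fits(8, layout))  # -> True
```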

But the real beauty of Open19 is in the backplane. Each Brick Cage accepts a snap-on cable harness at the rear that forms what Bachar calls a virtual chassis, distributing power and data to each Brick from a power shelf and a network switch fitted into the Cage. This arrangement dramatically cuts the cost, he claimed.

“A typical cable to connect 100GbE to a server is between $60 and $80. In this architecture, we’re sub-$10 per server. Just by that, we knocked down the cost by $70 per server,” he said.
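Scaled up, those per-server figures compound quickly. A quick calculation using Bachar's numbers and a hypothetical 40U rack fully populated with half-width 1U Bricks:

```python
# Scaling the quoted per-server cabling saving up to a whole rack.
saving_per_server = 70.0   # Bachar's quoted saving per server
servers_per_rack = 40 * 2  # hypothetical 40U rack, two Bricks per U

total = saving_per_server * servers_per_rack
print(f"~${total:,.0f} saved per rack on cabling alone")  # -> ~$5,600
```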

The Open19 Foundation has won backing from over 75 companies, including HPE, Supermicro, Inspur and QCT, partly because the Brick form factor is, in many cases, a good match for these firms' existing half-width motherboards.

Because it melds well with existing 19in infrastructure, Open19 may well appeal more to enterprises and mid-size hosting firms than OCP’s Open Rack. However, despite all the updates and enhancements, the 19in rack looks set to remain a part of the data center in one form or another for decades to come.

This article appeared in the December/January issue of DCD Magazine.