Electrical power enters data centers via transformers and is then routed around the facility. But the ways power gets to the servers and switches have been changing - largely because of the evolution of the servers and switches themselves.

Servers and switches have been getting more powerful, and are installed in racks which are cooled by air. That has meant more power cables, and that’s a problem - not just because of the time and effort involved in complex cabling.

Cables sitting under the raised floor deliver energy - but at the same time they obstruct the air that removes that energy once it has been consumed and turned into heat.

To improve this, power can be delivered from overhead, either from cables in trays or (increasingly) through busways which support flexible connectors down to the racks, where power distribution units (PDUs) provide power outlets for individual switches and servers.

In the last few years, there’s been a movement to simplify the way power is distributed within racks - and higher densities, along with the move to the cloud, could drive further changes.


Reducing complexity

About ten years ago, Facebook set up the Open Compute Project (OCP) to share its designs for data center hardware, and allow others to improve them. Facebook, and other OCP founders, are hyperscale players, with large data centers running monolithic applications. A highly standardized rack system suits them, so at the time OCP members designed a new “Open Rack,” which held more equipment (21 inches wide instead of 19), and replaced the PDU with a DC busbar - a copper conductor, similar to the overhead busway, which runs down the back of the rack.

Menno Kortekaas of Circle B likes the simplified power distribution in OCP racks, but his customers are much smaller than most OCP users, and they need his help.

Kortekaas runs a room full of Open Compute equipment in the Maincubes AMS01 data center in Amsterdam. It’s a refurbished space, and so is some of the kit: “There’s a few racks of renewed equipment that’s come back from Facebook,” he says - provided by circular economy player ITrenew.

“Customers are receptive to using DC power distribution instead of PDUs,” he tells DCD. “As long as the server gets power, they don’t mind.” Working with OCP kit does require care and understanding, but he thinks traditional PDUs are vulnerable to errors. “We also have some 19in racks, and when one network card failed, we went in to change it, and we turned off the server by mistake. Luckily it was redundant.”

OCP racks are different, and that makes them specialized - unless you happen to have a big data center full of them. “The takeup of OCP-powered racks depends on skill,” he says. “Companies big enough to build their own data center don’t need me.” His customers have between 6kW and 11kW in each rack, and the racks are put together by Rittal, with Circle B handling the installation.

“We provide remote hands,” he says. “If there is anything wrong, they log in and we fix it. They don’t have to have any specific hardware knowledge.”

Perhaps because OCP equipment is that specialized, 2017 saw another group develop an alternative approach to rack power distribution - this time, one designed to appeal to mid-size companies.

Open19 was launched by Yuval Bachar, who was LinkedIn’s chief data center architect. He spearheaded a move by the social media company to commission its own network hardware and design its own infrastructure, in order to save money - and then set up the Open19 Foundation to share that design with other users.

“The main difference between OCP and Open19 power distribution is shared versus dedicated,” says Bachar, who is now working on data center efficiency at LinkedIn’s new owner, Microsoft.

“In OCP, power is distributed through a busbar, and the whole rack shares that busbar,” he says. “Any fault that happens on it will knock down the whole rack. In Open19 racks, each server is fed directly from a power shelf which provides low voltage DC.”

The Open19 power shelf delivers power at 12V to cages for servers and switches. The IT hardware “bricks” have no power supply and slot into these cages, where they clip onto the power bar.
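As a rough illustration of what 12V distribution means in practice - the wattages and rack fill below are assumptions for the example, not figures from the Open19 or OCP specifications - a minimal sketch:

```python
# Illustrative only: current needed to deliver power over a 12V DC feed (I = P / V).
# Server wattage and rack fill are assumed for this example, not taken from any spec.

BUS_VOLTAGE_V = 12.0  # low-voltage DC, as delivered by an Open19 power shelf

def current_amps(power_watts: float, voltage: float = BUS_VOLTAGE_V) -> float:
    """Current required to deliver power_watts at the given voltage."""
    return power_watts / voltage

server_w = 400         # assumed draw for a single server "brick"
servers_per_rack = 20  # assumed rack fill

print(f"One {server_w}W server at 12V draws roughly {current_amps(server_w):.0f}A")
rack_w = server_w * servers_per_rack
print(f"A {rack_w / 1000:.0f}kW rack at 12V needs roughly {current_amps(rack_w):.0f}A in total")
```

Fed per server, as in Open19, each connection carries a few tens of amps; fed through a single shared conductor at the same voltage, as in an OCP busbar, the whole rack’s load adds up to several hundred amps.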

Because servers are powered individually, Open19 racks can include a level of server monitoring that is not possible with OCP - and which OCP’s typical users don’t need - says Bachar: “The main difference is between a shared environment for power distribution versus a dedicated environment.”

It all depends on what building block you deal with, says Bachar. Facebook has tens of thousands of racks, so it manages at the rack level, and can reboot a whole rack if necessary. “In Open19, every server counts - that’s why we created it.”

The dedicated feeds in Open19 allow servers to be monitored and controlled in the traditional way, while servers in an OCP rack can only be managed by a daemon running on the server itself.

The OCP-style rack-level busbar isn’t right for every OCP member, even those with their own hyperscale services. OCP implementations at Facebook and other hyperscale companies have diverged, and we understand that Microsoft’s own implementation of the OCP system eschews the busbar in favor of more dedicated control.

Open19 contributed its specifications as a standard within OCP, but it’s unclear at this point which OCP members, if any, see the need for it.

An open future


Open19 itself has had low visibility for the last year or so. Microsoft bought LinkedIn in 2016, and in 2019 announced that LinkedIn would be moving out of its own data centers and onto the Microsoft Azure cloud.

In 2020, when Covid-19 made travel impossible, the Open19 Summit was canceled outright, instead of moving online. The equivalent OCP event happened online, and some drew the conclusion that Open19 had folded.

But these rumors are exaggerated. Open19 still has the distinctive features that Bachar points out, and LinkedIn still uses many Open19 racks. LinkedIn’s move to Azure will take some years, leaving it on Open19 racks for some time to come. And meanwhile, a new champion for the Open19 rack and power distribution system is emerging.

In 2019, given his changing responsibilities at Microsoft, Bachar handed over the presidency of Open19 to Zachary Smith, CEO at Packet, a network company which delivered bare metal services using Open19 racks - a similar concept to the way Circle B plans to deliver infrastructure as a service on OCP racks.

Packet became the most public proponent of the Open19 rack design. But in early 2020, colocation giant Equinix bought Packet, and Packet’s future seemed unclear. Most of Equinix’s customers take space in its facilities, and many of them offer cloud services. Would Equinix step into IaaS?

Late in 2020, it became clear that was in fact the case. Equinix relaunched the Packet service under the brand Equinix Metal. Zac Smith is now leading that part of Equinix, and he predicts a massive boost to the use of the standard.

Smith thinks Open19 is ideal for a business that wants to quickly provision any amount of IT resource for enterprise customers in an IT environment which has been pre-wired for power and networking. The magic of it is that it’s pre-plugged and commoditized, but flexible and manageable down to the level of individual components.

“Most Equinix customers are not running a giant server farm of a million servers, where it’s okay if some go down. That’s not the scenario of most enterprises, especially in a distributed world,” he tells DCD.

For users of space at a colocation facility, getting hardware installed is key, but so is the ability to change and manage it once it is there. And for customers with hardware in multiple data centers, working at remote locations is an issue.

In recent years, some enterprises have moved to fully pre-loaded racks, in a system called “rack and roll,” where racks are pre-integrated with all their wiring, servers and switches at an off-site integration center, and then shipped to the data centers where they are installed - cabled and ready to use.


But there are problems when you examine this concept, says Smith: “Let’s piece apart a standard rack. Let’s say you’re not even doing crazy density, you’re just doing 40 servers per rack, with redundant power per server, and 2 x 25Gbps network ports. We’re talking about five cables each just for the servers, so you’ve got well over 200 cables at the back of your rack.”
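Smith’s numbers are easy to reproduce. The per-server breakdown below is an assumption for illustration - he gives only the total of five cables per server - but the arithmetic is the same:

```python
# Back-of-the-envelope check of Smith's cable count for a conventional rack.
# The per-server breakdown is assumed for illustration; Smith only states the total.

servers_per_rack = 40
cables_per_server = {
    "redundant power feeds": 2,      # A + B power, per Smith
    "25Gbps network ports": 2,       # 2 x 25GbE, per Smith
    "management link (assumed)": 1,  # e.g. an out-of-band connection - an assumption
}

server_cables = servers_per_rack * sum(cables_per_server.values())
print(f"Cables for the servers alone: {server_cables}")  # 200

# Switch uplinks, PDU feeds and so on push the real total "well over 200".
```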

Integrating a row of racks in advance, off-site, makes your data center very inflexible: “You put all your capital in, between a half million and a million dollars in silicon and memory, and wheel it in. And then you hope you never change it. Because the second you have to have some remote hands tech in Melbourne go touch it, it becomes a mess.”

The alternative means very expensive technicians have to visit that data center and cable everything up on-site. “That’s a very, very high cost per server, because you have no efficiencies when you’re in a data center doing system integration with ten servers.”

The Open19 approach disaggregates that, building the power and network cables into the rack before it arrives, but leaving the expensive technology to be slotted in without any expertise, once the racks are in place.

“It basically says, what if you could deploy a small amount of capital, your sheet metal and cables, at a different time from when you do your big capital, your CPU and your memory,” says Smith. “We’re talking thousands of dollars, not hundreds of thousands of dollars, to have your cabling and sheet metal done right - and then to add your expensive parts more incrementally, more just in time.”

That’s actually a neat summation of the same benefits which Menno Kortekaas promises with his refurbished OCP kit in Maincubes. His remote hands in Amsterdam are a smaller version of the armies of Equinix technicians with which Smith plans to deploy Equinix Metal. Both systems offer pre-wired infrastructure on demand.

The two models will also come close physically, because Amsterdam is one of the first four markets where Equinix is offering Metal.

Equinix bought a data center in Amsterdam in 2019. At the time, Switch Datacenters’ AMS1 was the home of Circle B’s first OCP Experience Center, and the purchase is the reason Circle B moved to its current home in Maincubes.

If IaaS based on pre-wired racks takes off, then one model (Open19 wiring) could replace another (OCP busbars) in what used to be OCP’s main shop window in Europe. Kortekaas will smile wryly.