Hyperscalers do it. Some big enterprises do it too. But how widespread is the use of DC power within data centers?
And more importantly, why are organizations using it?
Why data centers didn’t adopt 48V DC
DC racks have a long history - and if you are not currently using DC power distribution, it is almost certain that you have encountered it in the past, and may still be using it every day - in your phone.
“Power infrastructure has been somewhat black magic to most organizations,” says My Truong, field CTO at Equinix. “DC power distribution has been around for a long time in telecoms. We've largely forgotten about it inside of the colocation space, but 30 to 50 years ago, telcos were very dominant, and used a ‘48V negative return’ DC design for equipment.”
Telephone exchanges or central offices - the data centers of their day - worked on DC, and our lead image is of a central office in a US military installation.
“A rectifier plant would take AC power, and go rectify that power into DC power that it would distribute in a negative-return style point-to-point DC distribution.”
Those central offices had lead-acid batteries for backup, and landlines - the traditional plain old telephone system (POTS) - are based on a network of twisted-pair wiring that extends right to your home, where a proportion of that DC voltage rings your phone and carries your voice.
“We actually still see 48V negative return DC style equipment inside Equinix legacy sites and facilities,” says Truong. “We still use -48V rectifier plant to distribute to classic telecom applications inside data centers today, though we don't do much of that anymore.”
When mass electronics emerged in the 1980s, the chips were powered by DC and tended to operate at 12V or fractions of it, partly as a legacy of the voltages used in telecoms and in the automotive industry.
Data centers adopted many things from telecoms, most notably the ubiquitous 19-inch rack, which was standardized by AT&T way back in 1922. Now, those racks hold electronic systems whose guts - the chips inside the servers - fundamentally all run on DC power. But data centers distribute power as AC.
This is because the equipment going into data center racks had been previously designed to plug into AC mains, says Truong: “Commodity PCs and mainframes predominantly used either single phase or three phase power distribution,” he says.
Commodity servers and PCs were installed in data closets powered by the wiring of the office building, and those closets evolved into data centers: “AC distribution is a direct side effect of that data closet, operating at 120V AC in the Americas.”
Rackmount servers and switches are normally repackaged versions of that commodity equipment, and each contains a power supply unit (PSU) - also called a switch-mode power supply (SMPS) - which is essentially a rectifier that takes mains AC power and converts it to DC to power the internals of the system. In the 1990s and 2000s, that is the equipment data centers were built around.
Around that time, telco central offices also moved away from DC, says Truong: “48V was a real telco central office standard and it worked for the time that telco had it, but in the 2000s we saw a natural sunset away from 48V DC in telco as well. There’s some prevalence of it in telco still but, depending on who you ask, the value isn't there, because we can build reliable AC systems as well.”
AC inefficiencies
In a traditional data center, power is distributed through the building mostly as AC.
Power enters the building as higher-voltage AC, which is stepped down to a voltage that can be safely routed to the server rooms. At the same time, a large part of the power is sent to the Uninterruptible Power Supply (UPS) system, where the AC power is converted to DC to maintain the batteries.
Then it is converted back to AC and routed to the racks and the IT equipment. Inside each individual switch and server, the PSU or rectifier converts it back to the DC that the electronics wants.
But there are drawbacks to all this. Converting electricity from one form to another always introduces losses: every time power is converted between AC and DC, some energy is wasted. Because power is converted multiple times on its way to the chip, end-to-end efficiency can be as low as 73 percent.
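To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python. Every per-stage efficiency figure is an assumption chosen for illustration, not a measurement from any real UPS, transformer, or PSU:

```python
# How repeated AC/DC conversion compounds along a traditional chain.
# All efficiency figures are illustrative assumptions.
from math import prod

stages = {
    "UPS rectifier (AC to DC)":   0.94,
    "UPS inverter (DC to AC)":    0.93,
    "Transformer / distribution": 0.97,
    "Server PSU (AC to DC)":      0.88,
}

for name, eff in stages.items():
    print(f"{name}: {eff:.0%}")
print(f"End-to-end: {prod(stages.values()):.0%}")  # about 75% with these figures
```

Four stages in the high eighties to mid nineties multiply out to the mid seventies, which is how a facility can end up near that 73 percent figure.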
Also, having a PSU in every box in the rack adds a lot of components, complexity, and expense to the system.
Over the years, many people have suggested ways to reduce the number of conversions - usually by some form of DC distribution. Why not convert the AC supply to DC in one go, and then distribute DC voltages to the racks? That could eliminate inefficiencies, and remove potential points of failure at each conversion stage.
Efforts to distribute DC through the building have been controversial, and DC distribution within the rack means a significant departure from traditional IT equipment - but the idea has made strong progress.
Enter the bus bar
In traditional facilities, AC reaches the racks and is distributed to the servers and switches via strips of electrical sockets into which all the equipment is plugged. These are often referred to as power distribution units (PDUs), but they are basically full-featured versions of the power strips you use at home.
In the 2000s, large players wanted to simplify things and moved towards bus bars inside the racks. Rectifiers at the top of the racks convert the power to DC, and then feed it into single strips of metal which run down the back of the racks.
Servers, switches, and storage can connect directly to the bus bar, and run without any PSU or rectifier of their own. The rectifiers at the top of the rack effectively replace the PSUs inside the individual boxes, and a single strip of metal distributes DC power directly to the electronics inside them.
It’s a radical simplification, and one which is easy to do for a hyperscaler or other large user, who specifies multiple full racks of kit at the same time for a homogeneous application.
Battery backup can also be connected to the bus bar in the rack, either next to the power shelf or at the bottom of the rack.
At first, these were 12V bus bars, but that is being re-evaluated, says Truong: “Some hyperscalers started to drive the positive 48V power standard, because there was a recognition - at least, from Google specifically - that 48V power distribution does help in a number of areas.”
“48V power distribution is evolving and becoming very necessary in the ecosystem,” says Truong. “The 12-volt designs are out there but have their own set of challenges on the infrastructure side.”
There’s a basic reason for this: power is the product of voltage and current (P=V×I), but voltage and current are also related through resistance by Ohm’s Law (V=I×R).
In particular, at a higher voltage, less energy is consumed by the resistance of the bus bar, because less current is needed to deliver the same power. The power loss in the bus bar is proportional to the square of the current (P=I²R).
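A toy calculation makes the scaling concrete. The bus bar resistance and rack power below are made-up values, chosen only to show the ratio:

```python
# Conduction loss in a bus bar delivering the same power at 12V and 48V.
# Resistance and power values are illustrative assumptions.

BUS_BAR_RESISTANCE = 0.001  # ohms (assumed)
RACK_POWER = 15_000         # watts delivered to the rack (assumed)

for volts in (12, 48):
    amps = RACK_POWER / volts              # I = P / V
    loss = amps ** 2 * BUS_BAR_RESISTANCE  # P_loss = I^2 x R
    print(f"{volts}V bus bar: {amps:.0f}A, {loss:.0f}W lost to resistance")
```

With these assumed figures, moving from 12V to 48V cuts the current from 1,250A to about 313A - a factor of four - and the resistive loss by a factor of 16.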
The Open Compute Project (OCP), set up by Facebook to share designs for large data centers between operators and their suppliers, defined rack standards. The first Open Rack designs used Facebook’s 12V DC bus bar, but version 3 incorporated a 48V design submitted by Google.
Meanwhile, the Open19 group, which emerged from LinkedIn, produced its own rack design with DC distribution, but used point-to-point connectors - these are simplified snap-on connectors going from a power shelf to the servers but are conceptually similar to the connections within an old-school central office.
Racks go to 48V
Like OCP, Open19 started with 12V DC distribution, and then added 48V in its version 2 specification, released in 2023.
Alongside his Equinix role, Truong is chair of SSIA, the body that evolved from Open19, and develops hardware standards within the Linux Foundation. Truong explains the arrival of a 48V Open19 specification: “One of the things that we thought about heavily as we were developing the v2 spec, was the amount of amperage being supplied was maybe a little bit undersized for a lot of the types of workloads that we cared about. So we brought around 48V.”
Alongside the voltage change, Open19 power bricks now have 70 amps of current-carrying capability.
Both schemes put the power supplies (or rectifiers) together, in a single place in the rack, which gives a lot more flexibility: “Instead of having a single one-to-one power supply binding, you can effectively move rectifier plants inside of the rack in the ‘power shelf,’” he says. “We can have lower counts of power supplies for a larger count of IT equipment, and play games inside of the rectifier plants to improve efficiency.”
Power supply units have an efficiency rating, with Titanium being the sought-after highest value, but efficiency depends on how equipment is used. “Even a Titanium power supply is going to have an optimum curve,” says Truong. Aggregating PSUs together means they can be assigned to IT equipment flexibly to keep them in an efficient mode more of the time.
“It gives us flexibility and opportunity to start improving overall designs - when you start pulling the power supply out of the server itself and start allowing it to be dealt with elsewhere.”
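The effect Truong describes can be sketched in a few lines of Python. The efficiency curve and all the ratings below are invented for illustration - real curves come from PSU datasheets - but they capture the shape: poor at light load, peaking somewhere in the middle:

```python
# Why aggregating PSUs in a power shelf helps: efficiency depends on
# load fraction, and a shelf can keep fewer supplies near their sweet spot.
# The curve and ratings are illustrative assumptions, not real PSU data.

def psu_efficiency(load_fraction: float) -> float:
    """Assumed efficiency curve, peaking around 55 percent load."""
    return 0.94 - 0.5 * (load_fraction - 0.55) ** 2

def watts_lost(total_load_w: float, psu_rating_w: float, active_psus: int) -> float:
    """Input power wasted when the load is spread evenly across active PSUs."""
    load_fraction = total_load_w / (psu_rating_w * active_psus)
    return total_load_w / psu_efficiency(load_fraction) - total_load_w

TOTAL_IT_LOAD = 9_000  # watts (assumed)
PSU_RATING = 3_000     # watts per supply (assumed)

# One dedicated PSU per server (15 boxes at 20% load each) versus a
# power shelf running just five supplies at 60% load each.
for psus in (15, 5):
    frac = TOTAL_IT_LOAD / (PSU_RATING * psus)
    print(f"{psus} PSUs at {frac:.0%} load: "
          f"{watts_lost(TOTAL_IT_LOAD, PSU_RATING, psus):.0f}W lost")
```

In this toy model, consolidating the same 9kW onto five well-loaded supplies roughly halves the conversion loss - exactly the kind of “games inside of the rectifier plants” Truong mentions.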
Component makers are going along with the idea. For instance, power supply firm Advanced Energy welcomed the inclusion of 48V power shelves: "Traditionally, data center racks have used 12V power shelves, but higher performance compute and storage platforms demand more power, which results in very high current. Moving from 12-volt to 48-volt power distribution reduces the current draw by a factor of four and reduces conduction losses by a factor of 16. This results in significantly better thermal performance, smaller busbars, and increased efficiency."
DC kit is available
One of the things holding back DC racks is lack of knowledge, says Vito Savino of OmniOn Power: “One of the barriers to entry for DC in data centers is that most operators are not aware of the fact that all of the IT loads that they currently buy that are AC-fed are also available as a DC-fed option. It surprises some of our customers when we're trying to convert them from AC-fed to DC-fed. But we’ve got the data that shows that every load that they want to use is also available with a DC input at 48 volts.”
Savino says that “it's because of the telecom history that those 48V options are available,” but today’s demand is strongly coming from hyperscalers like Microsoft, Google, and Amazon.
DCD>Academy instructor Andrew Dewing notes that servers made to the hyperscalers’ demands, using the Open Compute Project standards, are now built without their own power supplies or rectifiers. Instead, these units are aggregated on a power shelf.
This allows all sorts of new options and simplifications, and also exposes the historic flaws of power supply units, which Dewing calls switched-mode power supplies, or SMPSs: “The old SMPSs were significantly responsible for the extent of harmonics in the early days of the industry and very inefficient. OCP, I'm sure, will be provisioning power to the IT components at DC with fewer and more efficient power supplies.”
Truong thinks this could change things: “Historically, when a power supply [ie a rectifier] was tightly coupled to the IT equipment, we had no way of materially driving the efficiency of power delivery. Now we can move that boundary. You can still deliver power consistently, reliably, and redundantly on a 40-year-old power design, but now we can decouple the power supplies - this gives us a lot of opportunity to think about.”
Going to higher voltage also helps within the servers and switches, where smaller conductors have the same issues with Ohmic resistance. “It is magnified inside of a system,” says Truong. “Copper traces on an edge card connector are finite. Because you have small pin pitches you are depositing a tremendous number of those pins, a tremendous amount of trace copper, to be able to move the amount of power that you need to move through that system.”
He sees 48V turning up in multiple projects, like OCP’s OAM, a pluggable module designed for accelerators that has a 48V native power rail option inside it: “I think that that could be another indicator that 48V power architectures across the industry are going to become prevalent.”
Truong predicts that even where the power supply units remain in the box, “we probably will see an evolution on the power supply side from 12V to 48V over the next few years. But that's completely hidden from anybody's visibility or view because it sits inside of a power supply.”