Data centers bring together large numbers of servers in one place and run applications on them. Whether they are enterprise, colocation, or cloud facilities, they have to operate 24x7 to support mission-critical applications, so as data centers emerged, the first priority was to build in reliability.

Once the reliability task was done, costs and efficiency came to the fore. Those early data centers were over-engineered and over-cooled to ensure reliability, but it quickly became apparent that more than half the energy they consumed went into keeping the hardware cool, and less than half was actually used in the computation.

Ten years of working on the efficiency of cooling systems have given us a current generation of facilities with a power usage effectiveness (PUE) of 1.2 or less, meaning more than 80 percent of the power they use is burnt in the servers themselves.
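The arithmetic behind that claim is simple: PUE is defined as total facility power divided by IT equipment power, so the fraction of power that actually reaches the servers is the reciprocal of the PUE. A minimal sketch:

```python
# PUE = total facility power / IT equipment power.
# The fraction of facility power that reaches the IT gear is therefore 1 / PUE.

def it_power_fraction(pue: float) -> float:
    """Fraction of total facility power consumed by the IT equipment."""
    if pue < 1.0:
        raise ValueError("PUE cannot be below 1.0")
    return 1.0 / pue

print(f"{it_power_fraction(1.2):.1%}")  # a PUE of 1.2 leaves ~83% for the servers
```

At a PUE of 1.2, roughly 83 percent of incoming power goes to the IT equipment, matching the "more than 80 percent" figure above.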

This feature appeared in the March issue of DCD Magazine.


Power to the server

So now, it’s time to start looking in more detail at the power used by servers, as a major component of the energy used by data centers. In February, the Lawrence Berkeley National Laboratory co-wrote a report commissioned by the US Department of Energy, which revealed some interesting statistics.

Firstly, the study confirmed an oft-quoted rule of thumb: data centers now consume a small but significant share of global energy. But while the word on the street has been cranking up to around two percent, the DOE report reckons the figure was closer to one percent in 2018.

That sounds like a manageable figure, but it masks areas where data centers have become a burden. For instance, Ireland is facing a boom in data center building and has a limited ability to grow its grid. The Irish Academy of Engineering has predicted that in 2027, 31 percent of all power on the grid will go to data centers.

Secondly, and more interestingly, the report shows that this overall figure is not growing as fast as some had feared.

Over the past decade, things have dramatically changed. In 2018, data center workloads and compute instances increased more than six-fold compared to 2010, yet power usage only went up by six percent.

Performance and cost improvements

“Increasing data center energy efficiency is not only an environmentally friendly strategy but also a crucial way of managing costs, making it an area that the industry should be prioritizing,” Jim Hearnden, part of Dell Technologies’ EMEA data center power division, told DCD. “Most IT managers are keen to increase their energy efficiency in relation to their data center, particularly when doing so also helps improve performance and reduce cost.”

It’s clear that data centers have seen huge efficiency gains - and as one would expect from the PUE figures, the majority of these have been in the cooling side of the facility. But during that same eight-year period, server energy consumption went up by 25 percent.

That’s a substantial increase, although it’s a much smaller uptick than the six-fold increase in workloads the study noted. It’s clear that the server is also getting more efficient, gaining the ability to handle higher workloads with less power.

Much of this is down to more powerful processors. We are still in the era of Moore’s Law, where the number of transistors on a chip has been doubling every two years, as predicted by Gordon Moore, the one-time CEO of Intel.

More transistors on a chip mean more processing power for a given amount of electrical energy, because more of the computation can be done within the chip, using the small power budget of on-chip circuits, without having to amplify signals to drive them off-chip to neighboring silicon.

Moore’s Law implies that the computational power of processors should double every 18 months, without any increase in electrical energy consumed, according to an observation by Moore’s colleague David House in 1975.
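House's observation compounds quickly. As a rough sketch (treating the 18-month doubling as a clean exponential, which real hardware only approximates), performance at constant power over a span of years scales as two to the power of the years divided by 1.5:

```python
# House's observation: performance roughly doubles every 18 months at
# constant power, so the gain over t years is 2 ** (t / 1.5).

def performance_gain(years: float, doubling_period_years: float = 1.5) -> float:
    """Idealized performance multiplier at constant power after `years`."""
    return 2.0 ** (years / doubling_period_years)

# Over the 2010-2018 window the study covers, the idealized trend alone
# would predict roughly a 40x gain in compute per watt:
print(round(performance_gain(8)))  # ~40
```

That idealized 40x gain over eight years helps explain how a six-fold rise in workloads could coexist with only a 25 percent rise in server energy.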

Beyond the processors, there has been waste energy to eliminate in all the other components that make up the servers in the data centers.

Supermicro makes “white-label” processor boards used by many large data center builders, and it has been hard at work to shave inefficiencies off its servers, according to Doug Herz, senior director of technical marketing at the company.

“The data center’s electric power consumption in the US has started to flatten off,” he told DCD in an interview. “It’s not going up that fast due to a number of energy-saving technologies. Despite people doing more, they are doing it with less electric power.”

Supermicro has spotted the part of the puzzle where it can help: “Manufacturers have not focused on idle servers and their cost,” Herz said. “And newer management software can aid in keeping that consumption down.”

Idle power

A five-year-old server can use 175W when it is idle, which is not that much less than when it is in use. Idle server power consumption has improved over recent years, but still, Herz estimates that data centers with idle servers can be wasting a third or even half of the power they receive.
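A back-of-the-envelope model shows how idle time turns into Herz's "third or even half" estimate. The figures below are illustrative, not tied to any specific vendor's hardware:

```python
# Rough model of idle waste: servers draw near-peak power even when idle,
# so energy burnt during idle time is pure overhead. The wattages and idle
# fraction here are illustrative assumptions, not measured figures.

def idle_energy_share(idle_w: float, busy_w: float, idle_fraction: float) -> float:
    """Fraction of total energy consumed while the server sits idle."""
    idle_energy = idle_w * idle_fraction
    busy_energy = busy_w * (1.0 - idle_fraction)
    return idle_energy / (idle_energy + busy_energy)

# An older server idling at 175 W and peaking at 250 W, idle half the time:
share = idle_energy_share(175, 250, 0.5)
print(f"{share:.0%}")  # ~41%
```

Under those assumptions, roughly 41 percent of the server's energy is consumed doing nothing, squarely in the range Herz describes.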

Newer management software can balance workloads, distributing tasks so servers spend less time idling. “This software is used not only to monitor the servers in your data center but also to load balance the servers in your data center and optimize the electric power,” Herz said.

“If you have a set amount of workloads that you have to distribute over a certain number of servers in your data center, maybe there are more efficient ways to go about it. Try optimizing the servers in your data center so that you're running some of them at full capacity. And, that way you're able to get economies of scale.”
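The consolidation idea Herz describes is essentially a bin-packing problem: fit a fixed set of workloads onto as few servers as possible, so the rest can idle at low power or be switched off. A minimal sketch using the classic first-fit-decreasing heuristic (this is an illustration of the general technique, not Supermicro's actual software):

```python
# First-fit decreasing bin packing: place each workload (largest first)
# on the first server with room, opening a new server only when needed.
# Loads and capacity are normalized, illustrative values.

def consolidate(loads: list[float], capacity: float = 1.0) -> list[list[float]]:
    """Pack workloads onto servers; returns one list of loads per active server."""
    servers: list[list[float]] = []
    for load in sorted(loads, reverse=True):
        for server in servers:
            if sum(server) + load <= capacity:
                server.append(load)
                break
        else:
            servers.append([load])  # no room anywhere: activate another server
    return servers

# Ten half-busy servers' worth of work fits on five fully-loaded machines:
packed = consolidate([0.5] * 10)
print(len(packed))  # 5
```

Running fewer servers at full capacity, rather than many at half load, is exactly the "economies of scale" Herz refers to, because of the high idle power draw described above.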

Further up the stack, it’s possible to optimize at a higher level, where the server power use shades over into the cooling. For instance, French webscale provider OVH takes Supermicro boards and customizes its servers, with specially-adapted racks and proprietary water cooling systems. Small watertight pockets are placed on hot components to conduct heat and transport it away.

“It makes good business sense,” OVH’s chief industrial officer, Francois Sterin, told DCD. “The goal is that our server needs to be very energy and cost-efficient.”

OVH has around 400,000 servers in operation, and its process is just as software-driven as Supermicro’s, Sterin told us: “We submit a server to a lot of different tests and environmental tests. This allows us to measure how much energy the rack is consuming.”

Minimus with liquid cooling
– Green Revolution Cooling

Interesting times ahead

It’s clear that energy efficiency is now top of mind at all levels of the data center stack. More efficient server chips are being managed more effectively, and used more continuously, so they crank out more operations per watt of supplied power.

At the same time, those servers are being cooled more intelligently. Liquid cooling is ready to reduce the energy demand on cooling systems, while conventional systems are being operated at higher temperatures so less energy is wasted.

We know that Moore’s Law is reaching the end of its reign, however. Chip densities can’t go on increasing indefinitely and delivering the same rate of increase in performance.

If we’ve made cooling as efficient as possible, and chip efficiency begins to level out, where will the next efficiency gains be found? One possibility is in the software running on those processors: how many cycles are wasted due to inefficient code?

Another possibility is in the transmission of power. Between eight and 15 percent of all the power put into the grid is lost in the long-distance high-voltage cables that deliver it. To reduce that would require a shift to a more localized power source, such as a micro-grid at the data center.
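The scale of that loss is easy to illustrate: for every megawatt a data center draws, the grid must generate more to cover what the cables dissipate. A quick sketch using the loss figures above:

```python
# Illustrative arithmetic on grid transmission losses: to deliver a given
# load, generation must be scaled up by 1 / (1 - loss_fraction).

def generation_needed(load_mw: float, loss_fraction: float) -> float:
    """Power that must be generated to deliver load_mw after line losses."""
    if not 0.0 <= loss_fraction < 1.0:
        raise ValueError("loss_fraction must be in [0, 1)")
    return load_mw / (1.0 - loss_fraction)

# At the 8 to 15 percent losses cited above, delivering 10 MW takes:
print(round(generation_needed(10, 0.08), 2))  # ~10.87 MW
print(round(generation_needed(10, 0.15), 2))  # ~11.76 MW
```

A micro-grid at the data center would shrink that overhead by shortening the distance the power has to travel.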

The data center sector has great needs, and plenty of ingenuity. The next stage of the efficiency struggle could be even more interesting.