The second law of thermodynamics is the most unbreakable of physical laws, and it is that law which requires all data centers to get rid of their heat. Luckily, data center engineers have kept their cool.

Pretty much every Joule of energy that goes into a data center is eventually turned into heat. The IT equipment burns electrical energy to do its symbolic work, and the heat it gives off has to go somewhere, or the equipment will overheat.


Cold planning

The aim of data center cooling is to do this job without using more than a bare minimum of energy on top of the IT equipment’s power. Ideally, all the electricity would go to the racks. In practice, plenty of other energy is required: pumping air or other coolants, running refrigeration systems, and more.

At its worst, powering and cooling a data center has been described as running a room full of electric heaters, alongside a roomful of hairdryers using just as much energy to cool them down. As cooling has evolved, however, the field has developed a subtler application of the science of psychrometrics.

There are metrics designed to grade and improve efficiency, and a panoply of cooling kit, including CRAC units (computer room air conditioners), CRAH units (computer room air handlers) and economizers.
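The article doesn’t name its metrics, but the most widely used is power usage effectiveness (PUE): total facility energy divided by IT energy. A minimal sketch of the arithmetic, with figures that are illustrative rather than drawn from the article:

```python
# Power usage effectiveness (PUE): total facility energy divided by IT energy.
# A PUE of 1.0 would mean every watt goes to the racks; cooling and other
# overheads push the figure higher.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

# A site drawing 1,500 kW overall to power 1,000 kW of IT scores 1.5;
# the "hairdryer" scenario above would score roughly 2.0.
print(pue(1500, 1000))  # 1.5
```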

The flavors of cooling – Holly Tillier

Most of this sort of equipment dates from the era when data centers were kept too cool, because builders were not prepared to take risks with the tolerance of IT systems.

Vast amounts of energy were spent getting the air temperature in a data center down well below the level that was actually required.

Recently, more focused approaches have limited the cooling, using in-row systems, and have introduced systems which cool using chilled water from central chillers instead of in-room refrigeration equipment.

The tolerance of IT equipment is better understood, and the TC 9.9 guidance from ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has resulted in a trend of rising operating temperatures in data centers, so less energy is wasted.


The details are complex. Trainers like DCPRO’s Barry Shambrook can spend a three-day course outlining the basics of cooling and how to improve it.

“It might seem simple because hot air rises,” he says. “But at the speeds we move air around in a data center, the buoyancy of air has very little effect, so hot air can actually go down.”

This means data center engineers have to pay attention to the airflow through a data center, and many of the biggest improvements have come simply from managing it better. Cold air should be directed to the equipment that needs cooling, and warm air should go to where it will give up its heat.

Openings in raised floors can create recirculation paths, with warm air returning past the IT kit; cold air may also move too fast, bypassing the cabinets or passing through so quickly that it picks up little heat.

The overall system will have a greater tendency to recirculate air if the temperature rise across the servers (the delta-T) is too small, because moving the same heat at a low delta-T demands more airflow than the cooling units may supply; conversely, if the delta-T is high, the excess supply air will tend to bypass the servers.
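To put rough numbers on that balance, here is a minimal sketch using standard air properties (about 1.2 kg/m³ density and 1,005 J/kg·K specific heat); the rack figures are illustrative, not from the article:

```python
# Rough airflow needed to carry a given IT load at a given server delta-T.
AIR_DENSITY = 1.2           # kg/m^3, approximate for room-temperature air
AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K), approximate

def required_airflow_m3s(it_load_kw: float, delta_t_k: float) -> float:
    """Airflow in m^3/s needed to remove it_load_kw of heat at a temperature rise of delta_t_k."""
    return (it_load_kw * 1000.0) / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)

# A 10 kW rack with a 12 K rise needs roughly 0.7 m^3/s of air; halve the
# delta-T and the demand doubles, making recirculation more likely if the
# cooling units cannot supply that much air.
print(round(required_airflow_m3s(10, 12), 2))  # ~0.69
print(round(required_airflow_m3s(10, 6), 2))   # ~1.38
```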

Electronically commutated (EC) fans have been adopted; their speed can be adjusted so that only the right amount of cooling is applied, saving masses of energy.
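The scale of those savings follows from the fan affinity laws, which the article doesn’t spell out: airflow varies roughly in proportion to fan speed, while fan power varies roughly with the cube of speed. A minimal illustrative sketch:

```python
# Fan affinity laws (idealized): airflow scales with speed, power with speed cubed.
def fan_power_fraction(speed_fraction: float) -> float:
    """Approximate power draw, relative to full speed, for a fan slowed to speed_fraction."""
    return speed_fraction ** 3

# Slowing a fan to 80 percent of full speed cuts its power to roughly half,
# which is why variable-speed EC fans save so much energy at partial load.
print(round(fan_power_fraction(0.8), 2))  # ~0.51
```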

Distributed cooling systems take the cooling to the racks, using active rear-door heat exchangers (ARDHs) attached to the cabinets themselves. This can work with server cooling systems, according to Rich Whitmore, CEO of ARDH-maker Motivair: “The addition of an ARDH actually reduces fan power consumption of the computers inside that rack, more than offsetting the minimal power consumption of the ARDH fan array.”

Other major recent improvements include the use of economizers: evaporative designs use water evaporation in a heat exchanger, so the circulating air can be cooled below the outside air temperature.

Adiabatic cooling

Ice Cream – Holly Tillier

This is “adiabatic cooling,” and Shambrook points out that it can’t add an unlimited amount of cooling, because air can only carry a certain amount of moisture. Using it effectively “uses up” a certain amount of water, which is expelled in the moist air.
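The water cost can be estimated from the latent heat of vaporization of water, roughly 2.45 MJ per kilogram near room temperature. A minimal sketch, with figures that are illustrative rather than from the article, suggests about a litre and a half of water per kilowatt-hour of heat rejected:

```python
# Rough water consumption of evaporative (adiabatic) cooling.
LATENT_HEAT_MJ_PER_KG = 2.45  # approximate latent heat of vaporization near room temperature

def water_litres_per_kwh_heat() -> float:
    """Litres of water evaporated to reject one kWh of heat (1 kg of water is about 1 litre)."""
    kwh_in_mj = 3.6
    return kwh_in_mj / LATENT_HEAT_MJ_PER_KG

# Roughly 1.5 litres of water per kWh of heat rejected, before any losses;
# this is the water that is "used up" and expelled in the moist air.
print(round(water_litres_per_kwh_heat(), 2))  # ~1.47
```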

Using these techniques, says Shambrook, “the whole of Europe can use free outside-air cooling all year round.”

Malcolm Howe of engineering consultant Cundall, winner of DCD’s Business Leader 2016 award for his work in data center design, agrees: “You don’t have to go to Luleå in Sweden to cool your data center. You can achieve refrigerant-free cooling all year round in London.”

At present, in large parts of the world, winter temperatures are so cold that incoming air has to be warmed to reach the right conditions for cooling a data center. But he warns that Europe’s cool climate may change in the future, due to climate change.

“We might have to add chillers to data centers in borderline areas,” he says. “There will be some locations, maybe in France or Spain, where it’s the difference between not needing heat-lopping equipment now and needing it in 15 years’ time.”

In other words, data centers, and the world, may get warmer. But data center engineers will always be indisputably cool.

This article was produced with the help of a course from DCPRO presented by the remarkably chill Barry Shambrook.

Mr Chilly – Holly Tillier