In December 2020, when Japanese giant NTT opened a data center in London, one big item of equipment was missing. Data center managers from a few years ago would have been surprised to see that the 32MW building in Dagenham has no air conditioning units.

In the last few years, the old consensus on how to cool a data center has gone. And there are further changes on the way.

"The latest technology removes the need for compressors and refrigerants," said Steve Campbell-Ferguson, SVP design and engineering EMEA for NTT Global Data Centres, at the virtual launch event of the Dagenham facility.

No mechanical cooling

This was not the first data center to be built this way, by a long chalk. In 2015, Digital Realty claimed that a 6MW London facility it built for Rackspace was the first in the UK to have no mechanical cooling.

And there are simple reasons why operators should want to move in that direction. Data center designers want to reduce the amount of energy spent removing heat from the IT load in the building. Before energy conservation was a big concern, data centers were built with air conditioning units which could consume as much energy as the IT racks themselves.

In the 21st century, this “wasted” energy became a key concern, and builders aim to reduce it as close to zero as possible, driving towards a PUE figure of 1.0. Replacing air conditioning units with more passive cooling techniques is one way of doing that, and can reduce the energy used in cooling by around 80 percent: NTT promised a PUE of 1.2 this year, while Rackspace claimed 1.15 five years ago.
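
To make that arithmetic concrete, here is a minimal sketch of how a PUE figure maps onto cooling energy, treating all non-IT overhead as cooling for simplicity; the figures are illustrative assumptions drawn from the numbers above, not vendor measurements.

```python
# Rough PUE arithmetic (illustrative figures, not vendor data).
# PUE = total facility energy / IT energy, so (PUE - 1) is the
# overhead spent on cooling and other non-IT loads.

def overhead(pue: float) -> float:
    """Non-IT energy as a fraction of the IT load."""
    return pue - 1.0

legacy = overhead(2.0)   # chillers drawing as much energy as the IT racks
modern = overhead(1.2)   # the kind of figure NTT promises for Dagenham

reduction = (legacy - modern) / legacy
print(f"Overhead falls from {legacy:.1f}x to {modern:.1f}x the IT load "
      f"({reduction:.0%} less energy spent outside the racks)")
# -> Overhead falls from 1.0x to 0.2x the IT load (80% less energy spent outside the racks)
```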

The change does not just reduce energy consumption: it also reduces the amount of embodied energy and materials in the building, and also cuts the use of refrigerants which are themselves potent greenhouse gases.

This option doesn’t work everywhere in the world: in warm or humid climates, there will be a large number of days in the year when chillers are needed.

But there’s a principle here. At the start of the century, it was assumed that there was one way to keep a data center cool: mechanical chillers driving cold air through contained racks of equipment. Now, that assumption has broken down.

Along with the drive to make data centers more efficient, there’s another reason: data centers are no longer uniform. There are several different kinds, and each one has different demands.

Colocation spaces, as we described, have a well-established path to reducing or removing the use of mechanical cooling, but there are other steps they may need to take.

There are also newer classes of data center space, with different needs. Let’s look at a few of these.

High Performance Computing (HPC)

Supercomputers used to be rare beasts, but now there’s a broader need for high performance computing, and this kind of capacity is appearing in existing data centers. It’s also pushing up the density of IT, and the amount of heat it generates, sometimes to more than 100kW per rack.

With efficiency in mind, data center operators don’t want to over-cool their facilities, so there simply may not be enough cooling capacity to spare for several racks of this density.

Adding HPC capacity can mean putting in extra cooling for specific racks, perhaps with a distributed approach that places a cooling system such as a rear-door heat exchanger on the individual racks that need it.

Alternatively, an HPC system can be built with a separate cooling system, perhaps using circulating fluid or an immersion tank, such as those provided by Submer, Asperitas, or GRC.


Hyperscale

Giant facilities run by the likes of Facebook, Amazon, and Google have several advantages over the rest of the industry. They are large and uniform, often running a single application on standard hardware across a floorplan as big as a football field.

The hyperscalers push some boundaries, including the temperatures in their data centers. With the ability to control every aspect of the application and the hardware that runs it, they can increase the operating temperature - and that means reducing the need for cooling.

Hyperscalers Microsoft and Google were among the first to go chiller-free. In 2009, Google opened its first facility with no mechanical cooling, in Saint-Ghislain, Belgium. In the same year, Microsoft did the same thing in Dublin.

Giant data centers are cooled with slow-moving air, sometimes given an extra chill using evaporation. It has turned out that the least energy-hungry way to produce that kind of flow is with a wall of large slow-turning fans.

The “fan-wall” has become a standard feature of giant facilities, and one of its benefits is that it can be expanded alongside the IT. Each new aisle of racks needs another couple of fan units in the wall, so the space in a building can be filled incrementally.

Aligned Energy builds wholesale data centers, and makes its own Delta3 cooling system, a fan-wall which CEO Andrew Schaap describes as a “cooling array” to avoid trademark issues. It supports up to 50kW per rack without wasting any cooling capacity, and scales up.

“No one starts out with 800W per square foot,” Schaap told DCD in 2020. “I can start a customer at a lower density, say 100W per square foot, and in two years, they can densify in the same footprint without any disruptions."

Cooling specialist Stulz has produced a fan-wall system called CyberWall, while Facebook developed one in association with specialist Nortek.

Edge

Distributed applications like the Internet of Things can demand fast response from services, and that’s led to the proposal of Edge data centers - micro-facilities which are placed close to the source of data to provide low-latency (quick) responses.

Edge is still emerging, and there will be a wide variety of Edge facilities, including shipping-container-sized installations, perhaps located at cell towers, closets or server rooms in existing buildings, or small enclosures at the level of street furniture.

There’s a common thread here - putting IT into spaces it wasn’t designed for. And maintaining the temperature in all these spaces will be a big ask.

Some of this will be cooled traditionally. Vendors like Vertiv and Schneider have micro data centers in containers which include their own built-in air conditioning.

Other Edge capacity will be in rooms within buildings, which already have their own cooling systems. These server rooms and closets may simply have an AC duct connected to the building’s existing cooling system - and this may not be enough.

“Imagine a traditional office closet,” said Vertiv’s Glenn Wishnew in a recent webcast. “That’s never been designed for an IT heat load.” Office space air conditioning is typically designed to deal with 5W per square foot, while data center equipment needs around 200W per square foot.
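
As a back-of-the-envelope illustration of that gap, the sketch below assumes a 50 sq ft closet; the closet size is hypothetical, and only the watts-per-square-foot figures come from the quote above.

```python
# Rough arithmetic on the office-closet mismatch described above.
# The closet size is an assumption chosen for illustration.

closet_area_sqft = 50            # hypothetical small office closet
office_cooling_w_per_sqft = 5    # what office air conditioning is sized for
it_load_w_per_sqft = 200         # typical data center equipment density

available = closet_area_sqft * office_cooling_w_per_sqft   # 250 W of cooling
required = closet_area_sqft * it_load_w_per_sqft           # 10,000 W of heat

print(f"Cooling available: {available} W, heat to remove: {required} W "
      f"({required // available}x shortfall)")
# -> Cooling available: 250 W, heat to remove: 10000 W (40x shortfall)
```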

Adding cooling infrastructure to this Edge capacity may be difficult. If the equipment is in an open office environment, noisy fans and aircon may be out of the question.

That’s led some to predict that liquid cooling may be a good fit for Edge capacity. It’s quiet, and it’s independent of the surrounding environment, so it won’t make demands on the building or annoy the occupants.

Immersion systems cocoon equipment safely away from the outside, so there’s no need to regulate outside air and humidity. That’s led to vendors launching pre-built systems such as Submer’s MicroPod, which puts 6kW of IT into a box one meter high.

The problem to get over, of course, is the lack of experience in using such systems. Edge capacity will be distributed and located in places where it’s hard to get tech support quickly.

Edge operators won’t install any system which isn’t thoroughly proven and tested in the field - because every site visit will cost hundreds of dollars.

However, liquid cooling should ultimately be a good fit for Edge, and even provide higher reliability than air-cooling. As David Craig of another immersion vendor, Iceotope, points out, these systems have no moving parts: “Immersive cooling technology removes the need for intrusive maintenance and its related downtime.”