Let’s start off by understanding what exactly evaporative cooling is. The term describes a system that takes advantage of the drop in temperature that results from the evaporation of (in our case) water into air. It’s age-old physics: for water to change state from a liquid to a gas, its molecules need energy, and that energy (the latent heat of vaporization) is drawn from the surrounding air. As the air gives up that heat, it gets much cooler.

This is also the process used in the refrigeration cycle. With a fridge, for example, you can hear the system turning on and off as the compressor cycles the coolant between gas and liquid. The compressed liquid is then allowed to expand and evaporate again, and as it does so it absorbs latent heat from its surroundings to make the change of state back to a gas. That absorption is what removes heat from the inside of the fridge.

Building Custodian’s colo space  – Custodian

Benefits of evaporative cooling

For us as a data center, it does several things. It gives us automatic, inbuilt, efficient humidity control during the winter season, which is when you get the real ESD (electrostatic discharge) issues. During summer the air is reasonably humid; very rarely will you look at the charts and see humidity dip below 30 percent. Most of the time in the UK we sit in the 40-60 percent band and above during summer. In winter, by contrast, you can get cold, dry days where it plummets to well under 20 percent once the air has been heated from zero up to the 21°C supply temperature required on the data floors.

The big thing is efficiency. Where you choose to use an alternative like fresh-air cooling (compared to DX), it is obviously ambient dependent. So, for example, if it’s 27 degrees outside, then that’s the temperature you are going to provide internally, plus you’re going to pick up a degree or two on the way in. Yes, there is a direct correlation between adiabatic cooling performance and ambient temperature, but for us the big thing is the reduced requirement for electrical transformer capacity. The liquid gold for any data center is transformer (and backup generator) capacity, which is a massively expensive resource. Especially when you consider that it could sit unused for over 95 percent of the year.
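The ambient dependence described above can be sketched numerically. A direct evaporative cooler’s supply temperature is commonly estimated from the dry-bulb and wet-bulb temperatures via a saturation effectiveness factor. The 27-degree example and the “degree or two” of fan pickup come from the article; the 19°C wet-bulb figure and 90 percent effectiveness are illustrative assumptions, not stated values.

```python
# Hedged sketch: estimate supply air temperature for fresh-air vs
# direct adiabatic (evaporative) cooling on a warm day.
# Assumptions: 90% saturation effectiveness, 19 C wet-bulb, 1.5 C fan pickup.

def evaporative_supply_temp(dry_bulb_c, wet_bulb_c,
                            effectiveness=0.9, fan_pickup_c=1.5):
    """Estimated supply temperature (C) from a direct evaporative cooler."""
    cooled = dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)
    return cooled + fan_pickup_c

# Fresh-air-only cooling: supply is simply ambient plus fan heat pickup.
fresh_air_supply = 27 + 1.5

# Adiabatic cooling on the same 27 C day, with an assumed 19 C wet-bulb.
adiabatic_supply = evaporative_supply_temp(27, 19)

print(f"Fresh air supply:  {fresh_air_supply:.1f} C")
print(f"Adiabatic supply:  {adiabatic_supply:.1f} C")
```

On these assumed conditions the adiabatic system delivers air several degrees below ambient, which is why its performance tracks wet-bulb rather than dry-bulb temperature.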

When you are designing a system you have many considerations, but one of them is peak demand. It’s all very well talking about the efficiency of adiabatic vs the efficiency of a chiller, but from a systems design perspective, you must understand what the peak requirements are.

So for a 330kW chiller, you need somewhere in the region of 200 – 250 amps per phase reserved for its compressors and pumps. Now, unless you hit 37+ degrees ambient, you won’t hit the peak electrical demand from the chillers. Typically, in this part of the world, we spend 95 percent of our time below 21 degrees according to the Met Office. But no matter how the weather turns out, you still have to keep all of the amps required for maximum cooling in reserve. You can’t use a single amp of it: it has to be ‘held to one side’. For adiabatic, the equivalent maximum electrical requirement is probably 15 amps. So the surplus transformer capacity becomes available for IT load, and you lose none of your cooling capacity.
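The reservation argument above can be put in rough numbers. This is an illustrative sketch using the article’s figures (250 amps per phase for the chiller, 15 amps for adiabatic); the three-phase, 230V phase-to-neutral supply is an assumed typical UK value, not stated in the article.

```python
# Hedged sketch: transformer capacity that must be held in reserve for
# worst-case cooling demand, chiller vs adiabatic.
# Assumptions: 3-phase supply, 230 V phase-to-neutral (typical UK values).

PHASES = 3
VOLTS = 230  # assumed phase-to-neutral voltage

def reserved_kva(amps_per_phase: float) -> float:
    """Transformer capacity (kVA) held aside for peak cooling draw."""
    return PHASES * amps_per_phase * VOLTS / 1000

chiller_reserve = reserved_kva(250)    # article's worst-case chiller figure
adiabatic_reserve = reserved_kva(15)   # article's adiabatic figure
freed_for_it_load = chiller_reserve - adiabatic_reserve

print(f"Chiller reserve:   {chiller_reserve:.1f} kVA")
print(f"Adiabatic reserve: {adiabatic_reserve:.1f} kVA")
print(f"Freed for IT load: {freed_for_it_load:.1f} kVA")
```

Under these assumptions, switching from the chiller to adiabatic frees on the order of 160 kVA of transformer capacity that can be sold as IT load instead of sitting idle for most of the year.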

An example

When making the decision to move over to this type of cooling, it was clear to us that we needed to use the best possible designs and equipment. So we supply full RO (reverse osmosis) water to keep the entire system as clean as possible, which means the droplet separators and spray nozzles all stay mineral free. To complement this, we have an extensive setup of cross-mains-fed RO plants, so we always have failovers. On that front, we’re also lucky as a site in that we have diverse grid feeds for both electricity and water mains.

It’s all very well talking about the efficiency of adiabatic vs the efficiency of a chiller, but you must understand what the peak requirements are

This approach is effectively 2N resilient, with completely separate duplicate systems protecting against failure should one element be affected by either an outage or technical difficulty. In addition, we also store large quantities of pre-RO and post-RO water in multiple storage tanks on site.

As opposed to the indirect evaporative cooling used by other data centers, we offer direct injection evaporative cooling. Indirect describes keeping the internal and external air volumes apart, separated by a heat exchanger; in other words, it is a closed-loop system.

In terms of efficiency, a heat exchange unit is clearly less efficient because it has to ‘trade heat’ between the two sides, effectively acting as a radiator. You’ve got to get cooler air on one side to take heat from the other side, and you’re going to lose 3 to 4 degrees in the crossover process.

Other critical design points for us were using a system that operates at extremely high pressure, ensuring the minimum droplet size. Evaporative chamber length and air speed are also crucial elements when you are asking for the upper end of evaporative efficiency, as we are in our data center at Custodian.

Rowland Kinch is CEO of British networking and colocation company Custodian Data Centres. He recently spoke in a DCD webinar on humidity control and energy efficiency.