It’s not exactly a secret that data centers are getting bigger and bigger – and, therefore, the challenge of cooling server halls and the ever-higher density racks they contain is becoming more acute.


On the one hand, cooling systems have been scaled up accordingly, as you would expect, but on the other, this also presents new technical and operational challenges. So where is the new technology that can help data center operators better manage their complex, bigger-than-ever cooling systems?

“Cooling systems are getting larger and larger. At Airedale, we’ve gone from one-megawatt suites, just five or so years ago, to 10MW halls in the last two or three years. The cooling system that's coupled to that has been upscaled accordingly,” says Reece Thomas, controls director at cooling specialists Airedale.

“During the early growth spurt of the data center industry we adopted the chiller sequencer, which allowed us to manage between four and eight chillers. But those chillers were only 500 kilowatts each, and now they’re 1.8MW each. And where there’s so much more cooling, there’s also greater potential for oscillation in the control, not meeting SLAs at set points, and other technical issues as you scale up.”
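The staging logic behind a chiller sequencer of the kind Thomas describes can be sketched in a few lines. This is a hypothetical illustration, not Airedale's actual control law: the thresholds, capacities, and function names are assumptions, and the deadband between the stage-up and stage-down thresholds is one common way to damp the oscillation he mentions.

```python
def stage_chillers(load_kw, running, chiller_kw=1800, max_chillers=8,
                   stage_up_at=0.85, stage_down_at=0.55):
    """Decide how many chillers should run for the current cooling load.

    Hypothetical staging rule: add a unit when the running chillers are
    loaded past the upper threshold, shed one when a smaller set could
    still carry the load comfortably. The gap between the two thresholds
    is a deadband that prevents rapid on/off cycling (oscillation).
    """
    # Stage up: running units are working past their comfortable limit.
    if running < max_chillers and load_kw > running * chiller_kw * stage_up_at:
        return running + 1
    # Stage down: one fewer unit would still sit well inside its capacity.
    if running > 1 and load_kw < (running - 1) * chiller_kw * stage_down_at:
        return running - 1
    return running
```

For example, a 5MW load on three 1.8MW chillers exceeds 85 percent of their combined capacity, so a fourth would be staged in; the same load would not trigger a stage-down until it fell well below the lower threshold.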

That’s why, he continues, Airedale developed the Cooling System Optimizer – to help overcome all the various challenges of data center cooling as it has adapted to meet the hyperscale challenge.

The aim is to bridge the gap between the control teams whose job it is to ensure the cooling systems operate correctly, and the teams delivering monitoring systems, such as the building management system (BMS).

Indeed, it was developed partly because Airedale, as a provider of both cooling systems and BMS/PMS software, for both networked units and full systems, enjoys unique insights into the challenges data center operators are now grappling with.

Switch and save

The Cooling System Optimizer is part of IQity, Airedale’s IoT technology framework. It sits between the monitoring layer and the cooling units themselves, monitoring data from the units and from sensors placed at critical points in the cooling system.

The data the Optimizer uses to make operational decisions includes live water flows and volumes through the chillers and the ambient temperature of the server hall. The Optimizer can make adjustments on the fly, based on the live data, which is also exported in a standard format to the BMS, so that more in-depth analysis can be conducted.
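One pass of such a control loop might look like the sketch below. All names, setpoints, and the simple proportional adjustment are illustrative assumptions, not Airedale's implementation; the point is the shape of the loop: read live measurements, nudge the chiller water setpoint toward whatever keeps the hall on target, and emit a flat record a BMS could ingest.

```python
from dataclasses import dataclass

@dataclass
class SystemReading:
    supply_water_c: float   # chilled-water supply temperature
    flow_l_per_s: float     # live water flow through the chillers
    hall_ambient_c: float   # server-hall ambient temperature

def control_step(reading, hall_target_c=22.0, gain=0.5):
    """One hypothetical optimizer pass: a proportional nudge to the
    water setpoint, plus a flat export record for the BMS."""
    error = reading.hall_ambient_c - hall_target_c
    # Hall too warm -> colder water; hall too cool -> warmer water.
    new_water_setpoint = reading.supply_water_c - gain * error
    bms_record = {                      # exported for deeper analysis
        "hall_ambient_c": reading.hall_ambient_c,
        "flow_l_per_s": reading.flow_l_per_s,
        "water_setpoint_c": round(new_water_setpoint, 2),
    }
    return new_water_setpoint, bms_record
```

A real system would of course layer pressure control, redundancy, and rate limiting on top of this, but the read-decide-export cycle is the essence of sitting "between the monitoring layer and the cooling units."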

“Every site is different, but the Optimizer can adapt to meet the specific requirements of each and every site. It’s not just controlling the individual units, it’s controlling the system as a whole, redundantly, indoors and outdoors. It links temperatures recorded in the white space all the way back to optimize the chiller.

Where Airedale's Cooling System Optimizer sits in the data center control architecture – Airedale

“It’s like one big ring. So computer room air handling (CRAH) units or fan walls are always sending data to the chiller system to ensure that the water temperature, flow, and pressure are correct. The system looks after everything – it’s all-encompassing. All the contractor needs to do is size the pipes to make sure there’s enough space for heat rejection on both sides of the ring run,” says Thomas.

Moreover, the monitoring of outside temperatures means that the Airedale Cooling System Optimizer is able to switch chillers to free cooling mode, when appropriate, helping to save even more money and cut carbon emissions further.
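The free cooling decision itself comes down to comparing outdoor air against the water temperatures. The sketch below uses assumed thresholds and an assumed coil approach temperature, not Airedale's actual switchover logic: the principle is simply that outdoor air must be colder than the water by roughly the coil's approach temperature before it can do useful work.

```python
def cooling_mode(outdoor_c, supply_setpoint_c=14.0, return_water_c=20.0,
                 approach_c=3.0):
    """Illustrative mode selection for a free-cooling chiller.

    Thresholds and the approach temperature are assumptions for the
    sake of the example, not a vendor's published control law.
    """
    if outdoor_c <= supply_setpoint_c - approach_c:
        return "free"        # ambient alone can hit the supply setpoint
    if outdoor_c <= return_water_c - approach_c:
        return "hybrid"      # pre-cool on the coil, trim with compressors
    return "mechanical"      # too warm outside; compressors carry the load
```

In a climate like London's, a large share of the year sits below these thresholds, which is why free cooling delivers such meaningful energy savings there.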

And the savings that can be generated by the Optimizer are not small, either.

In research conducted on a reference 10MW data center in London, UK, Airedale found that cost savings of around 44 percent were attainable, depending on site loading. With data centers estimated to account for three percent of global power consumption, and cooling estimated to account for up to 40 percent of that, the savings that can therefore be generated are significant.

The research compared power consumption of a real-life data center scenario, operating under typical conditions with an already highly efficient, low-GWP Airedale chiller, both at full and part load. The scenario was then replicated using an Airedale DCS chiller with increased free cooling coil rows and larger fans, an optimized CRAH unit with optimized chilled water coil, as well as the implementation of the Cooling System Optimizer to manage the whole system, of course.

Airedale’s research paper goes into much more technical detail covering the assumptions, methodology, and results, but the headline figures are eye-opening.

At full load, the enhanced cooling technology deployed cut power consumption by 36 percent, generating a cash saving of more than £700,000 ($850,000) and a carbon saving of almost 790 tons of CO2. At part-load, the power savings were even greater – 44 percent – with cost savings of just under £500,000 ($610,000) and carbon savings weighing in at just under 550 tons of CO2.
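A quick back-of-envelope check shows what those percentages imply about the baseline. Dividing the stated saving by the stated fraction gives the approximate annual cooling energy cost before optimization; these are rough figures derived from the article's rounded numbers, not Airedale's actual inputs.

```python
def implied_baseline(saving_gbp, saving_fraction):
    """Approximate pre-optimization annual cost implied by a stated
    saving and saving percentage (rounding makes these estimates only)."""
    return saving_gbp / saving_fraction

full_load = implied_baseline(700_000, 0.36)   # roughly £1.9M per year
part_load = implied_baseline(500_000, 0.44)   # roughly £1.1M per year
```

The part-load baseline comes out lower than the full-load one, as you would expect, since a lightly loaded site draws less cooling power to begin with.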

Already, Airedale has more than ten sites, covering around 90MW and including a CyrusOne facility in Europe, running the Optimizer and benefiting from these kinds of savings, with considerably more in the pipeline.

Every Airedale chiller produced today is factory-fitted with a dedicated controller so that it can be run and fine-tuned by the Cooling System Optimizer technology.

How cooling became cool

The Optimizer is just the latest cooling technology to be developed over the past decade of rapid change.

“At Airedale, we are involved in the design of the entire system. Even in the recent past – just five or six years ago – sites would be fitted with a fixed primary and variable secondary system [for redundancy]. But we started to see systems with a variable primary and variable secondary cooling system,” says Thomas.

“The Cooling System Optimizer was, in fact, born out of a problem we’d had to tackle with a sequencer. So we said, ‘Right, let’s take it back and make it as redundant as possible’,” he adds.

Moreover, cooling systems installed today are subject to far more intensive investigation before they’re even put to work.

“Every site we go to now there’s a consultant, a contractor’s consultant, the commissioning company for the end user, the commissioning company for the contractor. Testing has gone through the roof – it’s totally different from what it was just five, six or seven years ago – and you’ve got to satisfy the demands of all these different people.


“Therefore, the control system has to be absolutely bullet-proof. It has to work perfectly. And, at the same time, monitoring systems have improved because there are more eyes on the water temperature and other metrics, making sure they don’t nudge outside of the SLA’s parameters. Everything’s become more intensive,” says Thomas.

In the past, perhaps, with imperfect information and SLAs that could not be allowed to be breached, data center engineers would simply have cranked the cooling system up to 11. Yet with so much now invested in sprawling data center estates on the one hand, and operators’ sustainability commitments on the other, much smarter management of cooling can’t be ignored.

Airedale’s Cooling System Optimizer can help data center operators cut costs and carbon emissions at the same time, pleasing everyone from the floor of the optimized data hall to the comfy chairs of the C-suite.

To find out more about Airedale’s Cooling System Optimizer, and the savings it can make in your data center(s), please check out the ‘optimized’ Airedale website.