The industry has made plenty of advances in power and cooling, but data centers are a pretty conservative sector. Somewhat ironically, those advances can play off against each other, while best practice evolves slowly and by consensus.

“There’s innovation at the high end in web scale facilities,” says DCD Intelligence analyst Adi Kishore. “But small and mid-sized facilities don’t seem to be particularly looking for new technology, or overly concerned with cooling efficiency.”

There’s a reason for that: if a server is making thousands of dollars with the services it provides, then saving a few dollars on its power may not be a priority, especially if those savings come with higher risk or investment. As Professor Ian Bitterlin points out, 1kWh of energy costs pennies (around £0.17 in the UK). Divide the value a data center produces by its electricity bill and, Bitterlin calculates, it generates around £130 from that single kilowatt-hour - more if it is in the finance sector. When the financial return on electrical power is several hundred times its cost, minimizing the amount spent is not the top priority.
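As a back-of-the-envelope check on that arithmetic (the figures below are simply the ones quoted above, and the script is only an illustration):

```python
# Rough sketch of Bitterlin's arithmetic, using the figures quoted above.
# These are illustrative round numbers, not measured data.
cost_per_kwh = 0.17     # UK cost of 1kWh, in pounds
value_per_kwh = 130.0   # value a data center generates from that 1kWh, in pounds

ratio = value_per_kwh / cost_per_kwh
print(f"Return on electrical power: about {ratio:.0f}x its cost")
# Prints roughly 765x - hundreds of times the cost, more still in finance
```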

Towers of power


Despite this fact, there is a solid level of good practice in the industry, and peer pressure is having its effect. No data center is going to launch with a published PUE of more than 1.2, and any big player with a public presence now has to have a policy on energy efficiency and a move to renewable energy sources.
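For reference, PUE (power usage effectiveness, the Green Grid’s metric) is simply the total energy a facility draws divided by the energy delivered to the IT equipment, so a PUE of 1.2 means 20 percent overhead for cooling, power distribution and everything else. A minimal sketch of the calculation, with invented sample figures:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual figures for a facility with 20 percent overhead
print(pue(total_facility_kwh=1_200_000, it_equipment_kwh=1_000_000))  # -> 1.2
```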

As we said, data center providers are conservative. They want technologies which are tried and tested - and they have the good fortune to be in an industry where best practice is well known, well shared and well set out.

Power and cooling are intimately linked, because power distribution is all about getting electrical energy into the racks of equipment, where it does IT work. All the energy consumed ultimately turns into heat, and cooling is about getting that energy back out of the racks.

The big picture is about making all of that happen efficiently, using as little energy as possible. Here, bodies like ASHRAE and the Green Grid are setting out standards for efficiency (see ‘ASHRAE’s efficiency drive’ below).

There’s another aspect of that big picture: no matter how efficiently data centers operate, they will still need electricity. It’s possible to reduce the emissions of a data center by sourcing renewable power (see ‘Getting real on renewables’ below).

$20bn - predicted value of data center power services in 2020 (Research and Markets)

At the nitty-gritty level, power distribution could be changing. Traditionally, within a data center, power is converted between AC and DC, with voltage stepped up and down. UPS batteries must be charged to keep the data center going for a short while in an emergency, a lot of equipment needs AC, and ultimately the individual IT components in the rack are powered by DC.

Stripped-down designs created through groups like the Open Compute Project are suggesting ways to simplify that, using DC distribution and placing smaller rechargeable batteries at the servers, to remove wasteful conversions from the power distribution system.
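The gain on offer can be sketched by multiplying per-stage conversion efficiencies. The stage figures below are assumptions picked for illustration, not numbers from Open Compute designs:

```python
# Illustrative comparison of end-to-end efficiency for two power chains.
# Per-stage efficiencies are assumed values, not vendor or OCP specifications.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Traditional path: double-conversion UPS (AC->DC->AC), PDU transformer,
# then the server power supply rectifying back to DC.
traditional = chain_efficiency([0.94, 0.98, 0.92])

# Simplified path: a single rectification stage feeding a DC bus, with small
# rechargeable batteries sitting at the rack or server level.
simplified = chain_efficiency([0.96, 0.97])

print(f"Traditional chain: {traditional:.1%}")  # about 84.8%
print(f"Simplified chain:  {simplified:.1%}")   # about 93.1%
```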

Cool your jets

Meanwhile, what are the prospects of a big change in the way systems are cooled? The big development that’s always proposed is a move to liquid cooling. Fundamentally, liquid cooling is attractive because it removes heat with very little energy input, and the waste heat emerges in a highly concentrated and therefore usable form: water at more than 45°C can be used to heat buildings or warm greenhouses to grow plants.

That’s a generalization, of course, because there are numerous kinds of liquid cooling: from low-impact systems that provide extra cooling to hot racks by running water through the doors, to more radical redesigns that immerse the electronics in coolant, to futuristic scenarios that run a two-phase coolant through capillaries directly on the surface of hot chips.

That sounds confusing, but some branches of the technology are well established. Cabinet door cooling is an interesting case in point. Running water through the doors of racks is a way to get cooling to where it is needed. It’s been proposed for several years, and by 2013, solutions were well developed, but with some variations. Opticool, for instance, went for a system that pumps refrigerant - the same refrigerant used in big chillers - instead of water, to the rack doors.


This distributes the cooling to where it is needed. It can chill hotspots, or take over the entire data center’s cooling needs, doing away with the need for raised floors and air handling in the process.

But liquid cooling has been, so to speak, treading water. It’s used in high-performance computing (HPC) only. That’s not a market to be ignored, but it doesn’t have the giant opportunities of enterprise or webscale computing.

Those big markets obstinately keep cooling with air. They pack in more chips, but - thanks in part to those power efficiency improvements - those chips produce less waste heat, so the power density (and the temperature) never reaches the point where it’s economically necessary to turn on the liquid cooling taps.

This could change. Green Revolution Cooling, a company with a fairly specialized tank immersion system, recently came out with a range of servers that seems to be aimed at a more general market, even using concepts similar to those of the Open Compute Project. Other firms in the same arena, like Britain’s Iceotope with its Petagen range, are productizing techniques that used to be bespoke.

Meanwhile, Lenovo is selling water cooling for HPC. The solution was commissioned by the University of Birmingham, specifically to expand an iDataPlex installation which was pushing the limits of the power density the University could handle.

So liquid cooling is starting to become available in more consumable forms, and there are signs of demand for it.

As ever, though, it’s a race between power and cooling. If the power distribution system makes a leap to greater efficiency, that reduces the amount of heat that needs to be removed, and dampens the need for liquid cooling.