

Smart cooling, better power


There are great new ways to power and cool your data center, says Peter Judge. But sometimes they cancel each other out

The industry has made plenty of advances in power and cooling, but data centers are a pretty conservative sector. Somewhat ironically, those advances can play off against each other, while best practice evolves slowly and by consensus.

“There’s innovation at the high end in web scale facilities,” says DCD Intelligence analyst Adi Kishore. “But small and mid-sized facilities don’t seem to be particularly looking for new technology, or overly concerned with cooling efficiency.”

There’s a reason for that: if a server is making thousands of dollars with the services it provides, then saving a few dollars on its power may not be a priority, especially if those savings come with higher risk or investment. As Professor Ian Bitterlin points out, 1kWh of energy costs pennies (around £0.17 in the UK). Take the value produced by a data center and divide it by the energy it consumes and, Bitterlin calculates, the data center generates around £130 from each kWh - or more if it is in the finance sector. When the financial return on electrical power is close to a thousand times its cost, minimizing the amount spent is not the top priority.
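A quick back-of-envelope sketch of that argument, using only the two figures quoted above (the £130-per-kWh value is Bitterlin's estimate, not a measured number):

```python
# Back-of-envelope version of Bitterlin's argument, using the figures
# quoted in the text. Both numbers are rough, so the ratio is indicative only.
ENERGY_COST_PER_KWH = 0.17  # GBP - approximate UK price of 1 kWh
VALUE_PER_KWH = 130.0       # GBP - value Bitterlin attributes to 1 kWh of IT work

ratio = VALUE_PER_KWH / ENERGY_COST_PER_KWH
print(f"Return on each kWh: ~{ratio:.0f}x its cost")
```

With these numbers the return works out at several hundred times the cost - the same order of magnitude as the "thousand times" in the text, and enough to explain why efficiency isn't the first thing on an operator's mind.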

Towers of power


Despite this fact, there is a solid level of good practice in the industry, and peer pressure is having its effect. No data center is going to launch with a published PUE of more than 1.2, and any big player with a public presence now has to have a policy about energy efficiency and a move to renewable energy sources.
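For reference, PUE (Power Usage Effectiveness) is simply total facility energy divided by the energy that actually reaches the IT equipment. A minimal sketch, with hypothetical figures:

```python
# PUE = total facility power / IT power. The theoretical ideal is 1.0;
# everything above that goes to cooling, conversion losses and so on.
def pue(total_facility_kw: float, it_kw: float) -> float:
    return total_facility_kw / it_kw

# Hypothetical facility: 1,200 kW at the meter, 1,000 kW reaching the racks.
print(pue(1200.0, 1000.0))  # 1.2
```

A PUE of 1.2 - the informal ceiling for new builds mentioned above - means a 20 percent overhead on top of the IT load.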

Like we said, data center providers are conservative. They want technologies which are tried and tested - and have the good fortune to be in an industry where best practice is well known, well shared and well set out.

Power and cooling are intimately linked, because power distribution is all about getting electrical energy into the racks of equipment, where it does IT work. All the energy consumed ultimately turns into heat, and cooling is about getting that energy back out of the racks.

The big picture is about making that happen efficiently, to use as little energy as possible. Here bodies like ASHRAE and the Green Grid are setting out standards for efficiency (see ’ASHRAE’s efficiency drive’ below).

There’s another aspect of that big picture, which is to consider that no matter how efficiently data centers operate, they will still need electricity. It’s possible to reduce the emissions of a data center by sourcing renewable power (see ‘Getting real on renewables’ below).

$20bn - predicted value of data center power services in 2020 (Research and Markets)

At the nitty-gritty level, power distribution could be changing. Traditionally, within a data center, power is changed between AC and DC, with voltage stepped up and down. UPS batteries must be charged to keep the data center going for a short while in an emergency, a lot of equipment needs AC, and ultimately the individual IT components in the rack are powered by DC.

Stripped-down designs created through groups like the Open Compute Project are suggesting ways to simplify that: using DC distribution and placing smaller rechargeable batteries at the servers, removing wasteful conversions from the power distribution chain.
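To see why removing conversion stages matters, note that the losses multiply along the chain. A sketch with illustrative stage efficiencies (the figures are assumptions for the example, not measurements from any real facility):

```python
# End-to-end efficiency is the product of each conversion stage's efficiency.
# Stage figures below are illustrative only.
from math import prod

traditional = [0.96, 0.94, 0.92, 0.95]  # e.g. UPS, PDU, server PSU, board VRM
simplified = [0.97, 0.96]               # fewer stages in a stripped-down DC design

print(f"traditional chain: {prod(traditional):.1%}")  # ~78.9% end-to-end
print(f"simplified chain:  {prod(simplified):.1%}")   # ~93.1% end-to-end
```

With these assumed numbers, cutting the chain from four stages to two turns more than a fifth of the input power being lost as heat into less than a tenth - heat the cooling system then never has to remove.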

Cool your jets

Meanwhile, what are the prospects of a big change in the way systems are cooled? The big development that’s always proposed is a move to liquid cooling. Fundamentally, liquid cooling is great because it removes heat passively, with very little energy input, and the waste heat emerges in a highly concentrated and therefore useable form: water at more than 45C can be used to heat buildings or warm greenhouses to grow plants.
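The "concentrated and useable" part can be quantified with the standard heat-transport relation, power = mass flow × specific heat × temperature rise. The flow rate and temperatures below are hypothetical, chosen only to illustrate the scale:

```python
# Heat carried away by a water loop: Q = mass_flow * c_p * delta_T.
C_P_WATER = 4186.0  # J/(kg*K) - specific heat capacity of water

def heat_removed_kw(flow_kg_per_s: float, delta_t_k: float) -> float:
    return flow_kg_per_s * C_P_WATER * delta_t_k / 1000.0

# 0.5 kg/s of water warming by 10 K (say 45C in, 55C out) carries
# about 21 kW - roughly one densely packed rack's worth of heat.
print(f"{heat_removed_kw(0.5, 10.0):.1f} kW")  # 20.9 kW
```

The same calculation explains why air cooling needs so much more infrastructure: air's volumetric heat capacity is several thousand times lower than water's, so moving the same heat takes enormous volumes of airflow.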

That’s a generalization of course, because there are numerous kinds of liquid cooling, from low-impact systems that provide extra cooling to hot racks by running water through the doors, to systems which use more or less radical redesigns that immerse electronics in coolant, to futuristic scenarios running a two-phase coolant through capillaries directly on the surface of hot chips.

That sounds confusing, but some branches of the technology are well established. Cabinet door cooling is an interesting case in point. Running water through the doors of racks is a way to get cooling to where it is needed. It’s been proposed for several years, and by 2013, solutions were well developed, but with some variations. Opticool, for instance, went for a system that pumps refrigerant - the same refrigerant used in big chillers - instead of water, to the rack doors.


This distributes the cooling to where it is needed. It can chill hotspots, or take over the entire data center’s cooling needs, doing away with the need for raised floors and air handling in the process.

But liquid cooling has been, so to speak, treading water. It’s used in high-performance computing (HPC) only. That’s not a market to be ignored, but it doesn’t have the giant opportunities of enterprise or webscale computing.

Those big markets obstinately keep cooling with air. They pack in more chips, but then - thanks in part to those power efficiency improvements - they waste less heat, so the power density (and the temperature) never gets to the point where it’s economically necessary to turn on the liquid cooling taps.

This could change. Green Revolution Cooling, a company with a pretty specialised tank immersion system, recently came out with a range of servers that seem to be aiming towards a more general market, even using concepts similar to the Open Compute Project. Other firms in the same arena, like Britain’s Iceotope with its Petagen range, are productizing techniques that used to be bespoke.

Meanwhile, Lenovo is selling water cooling for HPC. The solution was commissioned by the University of Birmingham, specifically to expand an iDataPlex installation which was pushing the limits of the power density the University could handle.

So liquid cooling is starting to become available in more consumable forms, and there are signs of a demand for it.

As ever, though, it’s a race between power and cooling. If the power distribution system makes a leap to greater efficiency, that reduces the amount of heat that needs to be removed, and damps down the need for liquid cooling.

ASHRAE’s efficiency drive

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) is a recognized authority on the efficiency of buildings, and its best practices are well-regarded. However, its effort to create a standard for data centers has been something of a struggle.

The proposed 90.4P standard for data center efficiency could be referenced in building codes throughout the US and beyond, and is due for publication in summer 2016 - but there is still some controversy.

ASHRAE originally specified different levels of PUE, according to where, and in what climate, a data center would be located. The reference to PUE, the metric created by the Green Grid, was removed after critics in the industry argued this wasn’t a good use of the metric.

However, the requirement of different levels of efficiency remains, and some have warned that the resulting standard may increase bureaucracy and delays in building data centers.

As we go to press, ASHRAE has opened the draft standard to what it hopes will be a final round of consultation, before it publishes a final version of the standard at its technical conference in late June in St Louis.

However, there is real criticism of the concept and its execution. Professor Ian Bitterlin says the standard is very US-centric, including the use of Imperial, non-metric units. “The chances of it coming into any use in Europe are, sadly, very remote,” he says.

Others have commented that ASHRAE may be making a mistake on the principle involved: by applying formal standards to a fast-moving industry, it may be limiting the application of new ideas.

Getting real on renewables

It’s clearly in a company’s interest to reduce the amount of energy it uses, as this reduces its costs, and also reduces its environmental impact. However, the world as a whole needs to reduce its dependence on fossil fuels, so a move to renewable energy would contribute to the greater good.

Renewable energy tends to be more expensive, and to come from intermittent sources like wind and solar, while data centers need continuous power.

Large cloud providers such as Microsoft, Facebook, Amazon and Google have promised to go 100 percent renewable, on various timescales, but they have two advantages. They can build in places like Sweden, with continuous renewable energy from hydroelectric sources. And they can also negotiate large “power purchase agreements” (PPAs), paying for renewable power which will offset the non-renewable power they actually burn.

In recent months, content delivery network Akamai has promised to go 50 percent renewable. That’s a smaller commitment, but a bigger deal, because Akamai has an “edge” network, close to users. It rents space in colocation facilities, and cannot control its landlord’s power usage.

Microsoft has promised to directly use more renewable energy, such as biogas and fuel cells. There’s not much detail yet, but it could help to retire fossil fuel plants.



