Data center operators have been given definitive guidance that it is perfectly safe to run their facilities at temperatures up to 27°C (80.6°F). But large parts of the industry persist in over-cooling their servers, wasting vast amounts of energy and causing unnecessary emissions.

There are signs that this may be changing, but progress has been incredibly slow - and future developments don’t look likely to speed things up very much.

Don’t be so cool

When data centers first emerged, operators kept them cool to avoid any chance of overheating. Temperatures were pegged at 22°C (71.6°F), which meant that chillers were working overtime to maintain an unnecessarily cool atmosphere in the server rooms.

In the early 2000s, more energy was spent on the cooling systems than in the IT racks themselves, a trend which seemed obviously wrong. The industry began an effort to reduce that imbalance, and created a metric, PUE (Power Usage Effectiveness), to measure progress.

PUE is the total power used in the data center, divided by the power used in the racks - so an “ideal” PUE of 1.0 would mean all power is going to the racks. Finding ways to switch off the air conditioning, and letting temperatures rise, was a major strategy in approaching this goal.
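
As a minimal illustration of how the metric works - with figures invented for the example rather than drawn from any real facility - the calculation can be sketched in a few lines of Python:

# Hypothetical power figures, for illustration only
it_power_kw = 1000.0        # power drawn by the IT racks
cooling_power_kw = 600.0    # chillers, CRAC units, and air handlers
other_power_kw = 100.0      # lighting, UPS and distribution losses, etc.
total_power_kw = it_power_kw + cooling_power_kw + other_power_kw
pue = total_power_kw / it_power_kw
print(f"PUE = {pue:.2f}")   # 1.70 here; an "ideal" facility would approach 1.0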

In 2004, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) recommended an operating temperature range from 20°C to 25°C. In 2008, the society went further, suggesting that temperatures could be raised to 27°C.

Following that, the society’s expanded guidelines allowed class A1 equipment to run at temperatures up to 32°C (89.6°F), depending on conditions.

This was not an idle whim. ASHRAE engineers said that higher temperatures would have little effect on the lifetime of components, but would offer significant energy savings.

Figures from the US General Services Administration suggested that data centers could save four percent of their total energy, for every degree they allowed the temperature to climb.
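
Taken at face value - and assuming, purely for this sketch, that the saving applies linearly per degree Celsius of setpoint increase - the arithmetic for a move from the traditional 22°C to ASHRAE’s 27°C looks like this:

# Rough, illustrative estimate based on the rule of thumb quoted above;
# all values are assumptions for the sketch, not measured figures
saving_per_degree = 0.04    # ~4 percent of total energy per degree raised
old_setpoint_c = 22.0       # the traditional setpoint
new_setpoint_c = 27.0       # the ASHRAE-recommended upper bound
degrees_raised = new_setpoint_c - old_setpoint_c
estimated_saving = degrees_raised * saving_per_degree
print(f"Estimated energy saving: {estimated_saving:.0%}")   # roughly 20 percent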

Hyperscale companies are often best placed to pick up advanced technology ideas. They own the building, the cooling systems, and the IT. So if they allow temperatures to climb, then it’s their own equipment that feels the heat.

So it’s no surprise that cloud giants were the first to get on board with raising data center temperatures. Facebook quickly found it could go beyond the ASHRAE guidelines. At its Prineville and Forest City data centers, it raised the server temperatures to 29.4°C, and found no ill effects.

“This will further reduce our environmental impact and allow us to have 45 percent less air-handling hardware than we have in Prineville,” Yael Maguire, then Facebook’s director of engineering, said.

Google went up to 26.6°C, and Joe Kava, then vice president of data centers, said the move was working: “Google runs data centers warmer than most because it helps efficiency.”

Intel went furthest. For ten months in 2008, the chip giant took 900 servers: 450 ran in a traditionally cooled data center, while the other 450 were given no external cooling. The temperatures of the uncooled servers went up to 33.3°C (92°F) at times.

At the end of the ten months, Intel compared the two groups, and found the 450 hot servers had saved some 67 percent of the power budget.

In this higher-temperature test, Intel actually found a measurable increase in failures: two percent more of the hot servers failed. But that may have had nothing to do with the temperature - the servers under test also lacked air filtration and humidity control, so the small increase in failures may have been due to dust and condensation.

Some like it hot

Academics backed up the idea, with support coming from a 2012 paper from the University of Toronto titled Temperature Management in Data Centers: Why Some (Might) Like It Hot.

“Our results indicate that, all things considered, the effect of temperature on hardware reliability is weaker than commonly thought,” the Canadian academics conclude. “Increasing data center temperatures creates the potential for large energy savings and reductions in carbon emissions.”

At the same time, server makers responded to ASHRAE’s guidelines, and confirmed that these new higher temperatures were acceptable without breaking equipment warranties.

Given that weight of support, you might have expected data center temperatures to rise dramatically across the industry - and you can still find commentary from 2011 predicting a rapid increase in cold aisle temperatures.

However, look around for recommended data center temperatures today, and figures of 22°C and 25°C are still widely quoted.

This reluctance to change is widely put down to the industry’s reputation for conservatism, although there are some influential voices raised against the consensus that higher temperatures are automatically better (see Box).

Equinix makes a cautious move

All of which makes a recent announcement from Equinix very interesting. On some measures, Equinix is the world’s largest colocation player, housing a huge chunk of the servers which are neither in on-premises data centers nor in the cloud.

In December, Equinix announced that it would “adjust the thermostat of its colocation data centers, letting them run warmer, to reduce the amount of energy spent cooling them down unnecessarily.”

“With this new initiative, we can intelligently adjust the thermostat in our data centers in the same way that consumers do in their homes,” said Raouf Abdel, EVP of global operations for Equinix.

Equinix’s announcement features congratulatory quotes from analysts and vendors.

Rob Brothers, program vice president, data center services, at analyst firm IDC, explains that “most data centers … are unnecessarily cooler than required.”

Brothers goes on to say that the announcement will see Equinix “play a key role in driving change in the industry and help shape the overall sustainability story we all need to participate in.”

The announcement will “change the way we think about operating temperatures within data center environments,” he says.

Which really does oversell the announcement somewhat. All Equinix has promised to do is to make an attempt to push temperatures up towards 27°C - the target which ASHRAE set 14 years ago, and which it already recommends can be exceeded.

No Equinix data centers will get warmer straight away, either. The announcement will have no immediate impact on any existing customers in any Equinix data centers. Instead, customers will be notified at some unspecified time in the future, when Equinix is planning to adjust the thermostat at the site where their equipment is hosted.

"Starting immediately, Equinix will begin to define a multi-year global roadmap for thermal operations within its data centers aimed at achieving significantly more efficient cooling and decreased carbon impacts," says the press release.

And in response to a question from DCD, Equinix supplied the following statement: "There is no immediate impact on our general client base, as we expect this change to take place over several years. Equinix will work to ensure all clients receive ample notification of the planned change to their specific deployment site."

Customers like it cool

Reading between the lines, it is obvious that Equinix is facing pushback from its customers, who are ignoring the vast weight of evidence that higher temperatures are safe, and are unwilling to budge from the traditional 22°C temperature which has been the norm.

Equinix pushes the idea of increased temperatures as a way for its customers to meet the goal of reducing Scope 3 emissions, the CO2 equivalent emitted from activity in their supply chain.

For colocation customers, the energy used in their colo provider’s facility is part of their Scope 3 emissions, and there are moves to encourage all companies to cut their Scope 3 emissions to reach net-zero goals.

Revealingly, Equinix does not provide any supporting quotes at all from customers eager to have their servers hosted at a higher temperature.

For Equinix, the emissions from the electricity used in its cooling systems are part of its Scope 2 emissions, which it has promised to reduce. Increasing the temperature will be a major step towards achieving that goal.

"Our cooling systems account for approximately 25 percent of our total energy usage globally," said Abdel. "Once rolled out across our current global data center footprint, we anticipate energy efficiency improvements of as much as 10 percent in various locations."

Equinix is in a difficult position. It can’t increase the temperature without risking the displeasure of its customers, who might refuse to allow the increase or go elsewhere.

It’s a move that needs to be made, and Equinix deserves support for setting the goal. But the cautious nature of the announcement makes it clear that this could be an uphill battle.

However, Equinix clearly believes that future net-zero regulations will push customers in the direction it wants to be allowed to go.

"Equinix is committed to understanding how these changes will affect our customers and we will work together to find a mutually beneficial path toward a more sustainable future,” says the statement from the company.

“As global sustainability requirements for data center operations become more stringent, our customers and partners will depend on Equinix to continue leading efforts that help them achieve their sustainability goals.”