

Warming up to summer in the data center

To weather the summer heat better, your data center should warm up, says Intel’s Jeff Klaus 

With the Northern Hemisphere’s summer well under way, data center managers are scrambling to handle droughts and high temperatures. With above-average temperatures expected this summer, one of the most critical parts of maintaining data center operations is knowing how changes to outside weather and environments affect your cooling plan. DatacenterDynamics talked to Jeff Klaus, general manager for data center solutions, Intel DCM.

Klaus said: “Understanding your data center – inside and out – is critical to making the best decisions during the summer months. Contrary to what most hardware protocols denote, data centers can actually be kept at higher temperatures even during peak workloads. In fact, one of Intel’s customers is planning to bring the data center temperature to 80-82F when most operate at 70 degrees or below. With each degree raised in the data center, a two per cent saving is realized on the power bill – a significant year-over-year saving.”
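Klaus’s rule of thumb can be sketched as back-of-envelope arithmetic. The sketch below assumes the two-per-cent-per-degree figure applies linearly to the power bill (the article does not specify the exact basis, and the example bill is invented for illustration):

```python
# Rough estimate of the savings Klaus describes: about 2% off the
# power bill per degree Fahrenheit the setpoint is raised.
# The linear model and the example figures are illustrative assumptions.

SAVINGS_PER_DEGREE = 0.02  # 2% per degree F, per the article

def estimated_savings(current_temp_f, target_temp_f, annual_power_bill):
    """Estimate annual savings from raising the setpoint (linear model)."""
    degrees_raised = target_temp_f - current_temp_f
    return annual_power_bill * SAVINGS_PER_DEGREE * degrees_raised

# Example: raising a facility from 70F to the 80F Klaus mentions,
# on a hypothetical $1m annual power bill
print(estimated_savings(70, 80, 1_000_000))
```

On these assumptions, the ten-degree move from 70F to 80-82F that Klaus describes would shave roughly a fifth off the bill, which is consistent with the 20 per cent cooling-cost figure he cites later in the article.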


Jeff Klaus, general manager for data center solutions, Intel DCM

Source: Intel

Our only enemy is tradition

To Intel’s Jeff Klaus there is no argument about running hot in some data centers – it can be done within Intel’s strict guidelines on chip temperature limits. Klaus said: “The only thing stopping most data centers running their facilities hotter is tradition – it’s just not the done thing. However, data centers must become more agile and adopt the attitude that they have to start extracting as much information as they can from devices in the data center.

“Most DCIM providers and most data centers have the top, bottom and middle of the rack covered, with reasonable expectations that they will get the cooling and airflow data which they need to keep within limits. But they need to be more flexible and use more data sources.”

Asked about the specialist computational fluid dynamics solutions which would tell you exactly where you had a problem, or where you could run hotter or had to cool down, he said: “The Romonets and the Future Facilities who run computational fluid dynamics to measure air floor and heat spots are useful but still a small part of the market and tend to concentrate on new builds. So they are very good but I think they provide considerably more than most data centers would need.”

Klaus claims that: “For an average 300-rack, 3 megawatt data center, 80 per cent of the market is using a form of DCIM to control heat flow. Intel’s DCM can get temperature readings from storage devices as well as the power distribution units. Even networking devices now provide information on cooling and heating.”

Klaus is a fan of running hot: “Intel is a proponent of high temperature ambiency (HTA) – the biggest enemy of which just happens to be history and habit. But some of the largest cloud providers in Taiwan, Alibaba and Rackspace run using HTA. The original premise of the initiative was to run the data center much warmer than usual to save on cooling. One of our customers, a major Japanese online retailer, uses DCM from Intel running hot.”

Klaus says that it is not something Intel believes data centers should ‘dive into’. He said: “We start with one room – monitoring what you have – not actively doing anything for the first few days. We find most people take time to come to a decision. Then we add some additional data points, watch for any deviations – our algorithm looks for acceleration points over a specific time frame – then allows you to increase the temperature.”
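The staged approach Klaus describes – monitor first, then only permit a temperature increase when the readings show no “acceleration” over an observation window – might be sketched roughly as follows. Intel has not published DCM’s internals, so the threshold, window and second-difference test here are invented for illustration:

```python
# Illustrative sketch of the staged approach Klaus describes:
# observe the room, and only allow a one-degree increase when the
# temperature series shows no acceleration over the window.
# The threshold and the acceleration test are assumptions; Intel
# DCM's actual algorithm is not public.

ACCEL_THRESHOLD = 0.5  # max allowed acceleration per interval (assumed)

def accelerations(readings):
    """Second differences of a temperature series: its 'acceleration'."""
    return [readings[i] - 2 * readings[i - 1] + readings[i - 2]
            for i in range(2, len(readings))]

def safe_to_raise(readings):
    """Allow an increase only if no reading accelerated past the limit."""
    return all(abs(a) <= ACCEL_THRESHOLD for a in accelerations(readings))

steady = [70.0, 70.2, 70.4, 70.6, 70.8]   # gentle, linear drift
spiking = [70.0, 70.1, 70.3, 71.0, 73.0]  # runaway warming
print(safe_to_raise(steady))   # -> True
print(safe_to_raise(spiking))  # -> False
```

The design point is that a steady linear drift is tolerated (its second differences are near zero), while a curve that bends upward trips the check – matching Klaus’s emphasis on watching for acceleration rather than absolute temperature alone.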

The average data center can save 20 per cent in cooling costs

Klaus said that the ideal data centers to try out HTA were average 300-rack, 3MW facilities. “We increase the temperature one degree at a time to see how the whole environment reacts. For that average data center a four-degree temperature change is very conservative but can save 20 per cent in cooling costs.”

Klaus continued: “The biggest impact of running hot will be on those focused on thermal management. It’s not about the analytics but more about how you get the information. More major PDU vendors are becoming Intel partners – Schneider Electric sees that software is easier to use than PDUs, and software methodologies are cheaper.”

The ability to continue with these new techniques is strengthened by the new generation of Intel chips, which report outlet temperature – previous chips showed just inlet temperature.

Readers' comments (2)

  • There are substantive benefits when you use real-time data to track and monitor power usage, heat, etc. This feeds vital data into the DCIM platform – information that DCIM users can leverage to safely run their servers hotter than they may have before!


  • Jeff makes a very good point at the beginning of the article where he states:

    "Understanding your data center – inside and out – is critical to making the best decisions...".

    However, his statement in the middle of the article is not as helpful to the readers:

    "The Romonets and the Future Facilities who run computational fluid dynamics to measure air floor [sic] and heat spots are useful...but I think they provide considerably more than most data centers would need."

    This statement is not helpful because he is comparing CFD to monitoring, but they do completely different things. This is similar to a doctor saying you need a thermometer, but a good diet and healthy lifestyle are useful yet overrated.

    CFD is a simulation technique based on fundamental physics that predicts IT equipment temperatures and data center capacity that results from a planned physical or operational change. In other words, CFD is for operational planning.

    Monitoring tracks temperatures in real time. In other words, monitoring is for...well monitoring.

    To be clear, both monitoring and CFD are necessary to operate a data center at maximum performance. CFD is for planning. Monitoring is for alerting of problems that can occur if plans change or are not followed.

    A series of short videos about how CFD and monitoring work together to support data center operations can be found at:


  • Thanks Sherman

    Very interesting point.

    Peter Judge
