

Running hotter is not enough


By focusing on the trade-offs between mechanical load and electrical losses as a means to ensure energy efficiency, ASHRAE's new Energy Standard for data centers is paving the way for industry best practices and a standards-based approach to data center design.

The long-awaited Energy Standard for Data Centers, number 90.4-2016, from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), establishes the minimum energy efficiency requirements for data centers and includes recommendations on their design, construction, operation and maintenance, as well as on the use of on-site and off-site renewable energy.

Schneider StruxureWare data center operation (Source: Schneider)

Improving on the first version

ASHRAE’s earlier 90.1 standard applies to energy efficiency in buildings generally and is widely referred to in building regulations. 90.4 is a performance-based design standard and takes into account special considerations affecting data centers, including variations in both mechanical load and electrical losses across different climate zones.

Calculations for both electrical and mechanical components are made and then compared to the maximum allowable values for the appropriate climate zone. Compliance with the standard is achieved when the calculated values do not exceed the values contained in the standard. 
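The comparison the standard calls for can be sketched in a few lines of code. This is a minimal illustration only: the zone limits below are hypothetical placeholders, not figures from the 90.4 tables, and the function names are invented for this sketch.

```python
# Sketch of a 90.4-style compliance check: calculated mechanical and
# electrical values are compared against the maxima for the climate zone.
# NOTE: the limit values below are HYPOTHETICAL placeholders, not the
# real figures from the standard's tables.

ZONE_LIMITS = {
    # climate zone: (max mechanical value, max electrical-loss value)
    "4A": (0.35, 0.18),
    "6B": (0.28, 0.18),
}

def complies(zone: str, calc_mechanical: float, calc_electrical: float) -> bool:
    """True when both calculated values stay within the zone's maxima."""
    max_mech, max_elec = ZONE_LIMITS[zone]
    return calc_mechanical <= max_mech and calc_electrical <= max_elec

print(complies("4A", calc_mechanical=0.30, calc_electrical=0.15))  # True
```

The same calculated design values can pass in one climate zone and fail in another, which is exactly why the standard indexes its maxima by zone.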

Crucially the new standard does not require a Power Usage Effectiveness (PUE) rating to ensure compliance, although this was considered at an earlier stage of the drafting process. In this, the Society clearly recognizes that energy management in data centers is a more complex problem than can be resolved with a single metric such as PUE, useful though that figure certainly is in guiding energy-efficiency efforts.


Recent research detailed in Schneider Electric's White Paper 221, 'The Unexpected Impact of Raising Data Center Temperatures', found that only a full understanding of both the cooling and power infrastructure of the data center and the operational requirements of the IT equipment itself will yield optimum results in terms of efficiency and power consumption.

Running hotter is not enough

Laying undue emphasis on a single metric such as PUE, or on simple strategies such as allowing ambient temperatures to rise, is insufficient as a means of reducing overall power consumption. The theory supporting raised temperatures is that cooling equipment can operate in economy mode and will not need to be used as frequently, resulting in a lower energy requirement.

However, experience shows that the results of this strategy have been mixed.

PUE has the advantage of simplicity, in that it represents efficiency as a single metric, allowing data center operators to track the effectiveness of the power and cooling systems over time. However, it is quite limited: it measures only the ratio between the energy consumed by the facility as a whole and the energy consumed by the IT equipment alone.

Therefore, lowering your PUE rating does not necessarily mean that your overall energy consumption has been reduced. PUE is only a measure of how efficiently the physical infrastructure delivers power to the IT load. It says nothing about the total energy being consumed by the data center: it is a ratio, not a value that indicates a quantity of energy.

In essence your PUE can improve (i.e., power and cooling systems are more efficient) but your energy use throughout the data center might be the same or higher.
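A small worked example makes the ratio-versus-quantity point concrete. The energy figures are invented for illustration; only the PUE definition itself (total facility energy divided by IT energy) comes from the metric.

```python
def pue(total_facility_mwh: float, it_mwh: float) -> float:
    """PUE = total facility energy / IT equipment energy."""
    return total_facility_mwh / it_mwh

# Year 1: 1,000 MWh of IT load plus 600 MWh of infrastructure overhead.
year1 = pue(1600.0, 1000.0)   # 1.60

# Year 2: IT load grows to 1,400 MWh with 700 MWh of overhead,
# so the infrastructure is more efficient relative to the load...
year2 = pue(2100.0, 1400.0)   # 1.50

# ...PUE improves, yet total consumption rose from 1,600 to 2,100 MWh.
print(year1, year2)  # 1.6 1.5
```

The rating falls from 1.60 to 1.50 even as the site draws 500 MWh more, which is precisely the scenario the paragraph above describes.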

Allowing chillers to operate in economizer mode for a greater part of the year does indeed produce immediate energy savings. However, these are offset by the greater burden placed on other parts of the cooling infrastructure. Air coolers, for example, must operate when the chillers are in economizer mode, and the fans, both in the server racks themselves and in the CRAH (computer room air handler) units, have to work harder, and use more energy, as temperature rises.
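Why "working harder" is so costly follows from the standard fan affinity laws: airflow scales roughly linearly with fan speed, but fan power scales roughly with the cube of speed. A minimal sketch (the 20 percent figure is an illustrative assumption, not a value from the article):

```python
def fan_power_ratio(speed_ratio: float) -> float:
    """Fan affinity laws: airflow scales linearly with fan speed,
    but fan power scales roughly with the cube of fan speed."""
    return speed_ratio ** 3

# Spinning fans 20% faster to move more air at elevated temperatures:
extra = fan_power_ratio(1.2)
print(extra)  # 1.728, i.e. roughly 73% more fan power
```

A modest increase in required airflow can therefore erase much of the saving the chillers made in economizer mode.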

Schneider Electric has completed studies of data centers in very different climate regions, and the consequences of allowing temperatures to rise can vary greatly depending on the location and on whether or not a data center is operating at full load.

It depends where you are

When the data center was operating at full load and temperatures were allowed to float between 15.6°C and 25.7°C, rather than being maintained at the lower level, energy efficiency and total cost of ownership were both improved in Seattle; energy efficiency improved slightly but total costs were unchanged in Chicago; and in Miami, a hotter climate, both efficiency and total costs worsened.

At half load, energy and total costs were improved in both Chicago and Seattle but they increased again in Miami. One reason for increased overall cost at high temperatures is the effect on the reliability of IT equipment. Servers and storage products tend to have higher rates of failure when operating at higher temperatures.

The team at Schneider Electric’s Data Center Science Center concluded that although operating at higher temperatures can be a useful strategy, care must be taken when implementing it to ensure optimal effects. Necessary steps include the adoption of air-management practices, such as the use of hot or cold-aisle containment systems, to reduce the risk of hot spots; the cooling architecture of a data center should be designed to handle elevated temperatures; and the design should also take into account the business growth plan as data center behaviour may vary as the IT load changes.

In addition, greater collaboration with IT equipment manufacturers is necessary to gain a better understanding of how the operational IT load and its reliability are affected at high temperatures.

By allowing greater latitude to data center designers to build their facilities to their specific requirements, and by taking into account the differing load and cooling strategies that must be deployed in different climatic regions, ASHRAE's new 90.4 standard will encourage innovation in the development of efficient data centers, resulting in more reliable, efficient and cost-effective IT services.

Victor Avelar is director and senior research analyst at Schneider Electric's Data Center Science Center

 

