These recent heat waves aren’t one-offs. Anyone who lived through the summer of 2022 can see that climate change is pushing the mercury higher. As it does, the challenge of keeping data centers cool becomes more complex, expensive and power intensive. The electricity required to do this is already affecting other infrastructure: in London, the high power demands of data centers have recently constrained new house building. With data volumes growing, this need will only expand.

For those of us in the data storage and processing world, keeping cool is not a new challenge. Any data center manager will be familiar with balancing efficient power consumption and consistent temperatures against the needs of the business. And while there’s plenty of innovation in cooling equipment, new technologies can be hard to retrofit into existing data centers.

Thankfully, there are some pragmatic, sustainable strategies to explore as part of a holistic solution. 

Keeping cooler air circulating 

Good air conditioning is a requirement for modern data centers, but many facilities were never designed to operate in the conditions we now experience during heat waves. It’s troubling to read of sites resorting to hosepipes to make sure their Heating, Ventilation and Air Conditioning (HVAC) systems can cope.

Newer data centers are built with a focus on improving Power Usage Effectiveness (PUE), the ratio of total facility energy to the energy delivered to IT equipment: the closer PUE gets to 1.0, the less energy is spent on overheads such as cooling. For those who have the option, building data centers in colder climes can do a lot to reduce the cooling burden. Of course, for many, this isn’t a practical option.
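
To make the metric concrete, here is a minimal sketch in Python, using invented figures rather than measurements from any real facility:

```python
# Power Usage Effectiveness (PUE) illustration with hypothetical figures.
# PUE = total facility power / IT equipment power; 1.0 is the ideal.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the PUE ratio for a given facility and IT load."""
    return total_facility_kw / it_equipment_kw

# Invented example: a site drawing 1,500 kW overall to run 1,000 kW of IT load.
total_kw, it_kw = 1500.0, 1000.0
overhead_kw = total_kw - it_kw  # cooling, power distribution, lighting, etc.

print(f"PUE: {pue(total_kw, it_kw):.2f}")  # PUE: 1.50
print(f"Overhead: {overhead_kw:.0f} kW spent on non-IT loads")
```

At a PUE of 1.5, every watt delivered to servers and storage drags along another half watt of overhead, much of it cooling; driving PUE down attacks that overhead directly.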

A holistic approach should consider not just how to improve the data center’s power and cooling capacity itself, but also whether that capacity is being consumed efficiently. Many IT organizations are now scrutinizing the contents of their data centers and even setting explicit power-reduction targets.

Power reduction suggestions

Here are three strategies IT organizations should be considering. Combined, they can help reduce the power and cooling requirements of data centers:

  • More efficient solutions
    This is stating the obvious: every piece of hardware uses energy and generates heat. Reducing the amount of power used means less heat to remove, so organizations should look for hardware that does more for them with a smaller power footprint.
    Increasingly, IT organizations are weighing power efficiency when selecting what goes into their data centers. In the world of data storage and processing, for example, key metrics now being evaluated include capacity per watt and performance per watt (see the first sketch after this list). With storage systems and components representing a significant portion of data center hardware, upgrading to more efficient solutions can substantially reduce the overall power and cooling footprint.
  • Disaggregated architectures
    Now we turn to direct attached storage and hyperconverged infrastructure (HCI). Many vendors talk about the efficiencies of combining compute and storage subsystems. That’s absolutely fair, but the efficiency is mainly about fast deployment and unified management tools; it doesn’t necessarily mean energy efficiency. In fact, at scale, direct attached storage and hyperconverged systems waste quite a bit of power.
    For one thing, compute and storage needs rarely grow at the same rate. Some organizations end up over-provisioning the compute side of the equation to cater for their growing storage requirements; occasionally the reverse happens, and in either scenario a lot of power is wasted (see the second sketch after this list). If compute and storage pools are separated, it’s easier to reduce the total number of infrastructure components needed, and therefore to cut the power and cooling requirements too. Additionally, direct attached storage and hyperconverged solutions tend to create silos of infrastructure: unused capacity in one cluster is very difficult to make available to other clusters, which leads to even more over-provisioning and wasted resources.

  • Just-in-time provisioning
    The legacy approach of provisioning for the next three to five years’ requirements is no longer fit for purpose: it leaves organizations running far more infrastructure than they immediately need. Modern on-demand consumption models and automated deployments instead let companies scale their data center infrastructure easily over time. Infrastructure is provisioned just-in-time rather than just-in-case, avoiding the need to power and cool components that won’t be needed for months or even years.
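
To put some numbers behind the capacity-per-watt metric from the first strategy, here is a minimal sketch comparing two hypothetical storage systems. Every figure is invented for illustration and reflects no real product’s specifications:

```python
# Hypothetical capacity-per-watt comparison between two storage systems.
# All figures are invented for illustration, not vendor specifications.

systems = {
    "legacy disk array": {"capacity_tb": 500, "power_w": 4000},
    "dense flash array": {"capacity_tb": 1000, "power_w": 1500},
}

for name, spec in systems.items():
    tb_per_watt = spec["capacity_tb"] / spec["power_w"]
    print(f"{name}: {tb_per_watt:.3f} TB per watt")

# legacy disk array: 0.125 TB per watt
# dense flash array: 0.667 TB per watt
```

In this invented example the denser system stores over five times more data per watt drawn, and every watt it avoids is also a watt of heat the HVAC plant never has to remove.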

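And to illustrate the over-provisioning problem from the second strategy, here is a back-of-the-envelope sketch. Again, all figures are invented; the point is only the shape of the arithmetic when storage demand dictates the size of a hyperconverged cluster:

```python
# Hypothetical model of over-provisioning in a hyperconverged (HCI) cluster.
# Adding storage means adding whole nodes, so surplus compute comes along
# for the ride. All figures are invented for illustration.

node_storage_tb = 20      # usable storage per HCI node
node_power_w = 800        # power draw per node
storage_needed_tb = 400   # storage the workload actually requires
compute_nodes_needed = 8  # nodes' worth of compute the workload actually uses

# Storage demand dictates cluster size, regardless of compute demand.
nodes_for_storage = -(-storage_needed_tb // node_storage_tb)  # ceiling division
surplus_nodes = max(0, nodes_for_storage - compute_nodes_needed)

print(f"Nodes required by storage demand: {nodes_for_storage}")    # 20
print(f"Nodes' worth of compute in use:   {compute_nodes_needed}") # 8
print(f"Nodes whose compute sits idle:    {surplus_nodes}")        # 12
print(f"Power drawn by those idle nodes:  {surplus_nodes * node_power_w} W")  # 9600 W
```

In a disaggregated design, those twelve nodes’ worth of storage could instead live in dedicated, lower-powered storage enclosures, shrinking both the power draw and the heat that has to be removed.
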
Most of the time, keeping data centers cool comes down to reliable air conditioning and solid contingency planning. But in every facility, each fraction of a degree the temperature rises adds a fractional increase in the stress on equipment. If temperatures do spike, it pays to be running hardware that’s durable and reliable. Flash storage, for instance, typically tolerates high temperatures far better than mechanical disk, meaning data stays secure and performance remains consistent even when the mercury climbs.

And while the efficiency of the data center itself should be a key consideration for anyone, why wouldn’t we also take steps to reduce equipment volumes and heat generation in the first place? If we can cut running costs, simplify our data centers and reduce our energy consumption, all at the same time, then I’m not sure that’s even a question worth asking.
