Just a few months ago, Phoenix, Arizona, reached 113F (45C). Austin, Texas, hit 110F (43C). San Jose, California, sweltered in 109F (43C) heat. Boston and Chicago broke their record high temperatures.
And those heat waves weren’t confined to the United States. Europe had its hottest summer on record, with Hamburg and London reaching 104F (40C) and Rome reaching 105.4F (40.8C). Even Japan saw its worst heat wave in 150 years.
So we had a hot summer; that’s no surprise. Anyone who has paid attention knows that hot summers are the new normal for much of the world: the last eight years have been the hottest on record.
But why should that matter to us? Because the 2022 heat wave knocked data centers offline.
- Data centers at the UK National Health Service failed, forcing staff at multiple hospitals to use paper for record-keeping for over a week.
- Google and Oracle data centers in London overheated, taking dozens of services offline.
- A Twitter data center in Sacramento experienced a total shutdown due to heat, putting Twitter in “a non-redundant state.”
And these are just the failures that were publicly attributed to the heat.
In the past, we operated on the assumption that extreme temperatures were rare anomalies. Unfortunately, the past few years show us that extreme temperatures are becoming more common, and we have to think through the risks to data center operations caused by a new normal of excessive ambient temperatures.
As data center designers, providers, and operators, we all know that data center cooling systems operate within a range of temperatures, and increased heat puts added pressure on them.
According to the Uptime Institute, because most data centers in operation were built years ago, when expectations for ambient temperatures were lower, this most recent heat wave either approached or surpassed their design specifications for ambient operating temperature.
What are the consequences of being forced to operate outside design specifications? Customer-impacting failures, of course; we all build in redundancy, but in the cases of Twitter, Google, and Oracle, that wasn’t enough. Reduced mean time between failures (MTBF)? Possibly, driving up costs. Issues with supporting equipment such as backup power generators? Yes: on hot days they may overheat, failing to deliver full nameplate power or even shutting down.
These problems are pushing data center operators to rethink their approach to cooling. Since most data centers were built before these extreme temperature events, more and more data center providers will be forced to look at their cooling technologies and decide if changes need to be made.
It’s possible that increasing numbers of extreme temperature events will put so much pressure on conventional air-cooling systems that failures will increase, and risks will become impossible to ignore.
Also, as extreme weather events become more frequent and more people experience the reality of climate change, data center providers will face growing political and social pressure to be responsible and make their activities more sustainable.
Providers will be under more pressure to run their operations efficiently, not just via greenwashing, like purchasing carbon credits, but by fundamentally changing how data centers function to reduce their impact on the environment.
As providers encounter the limits of conventional air-cooled data centers for both practical and political reasons, it’s natural for them to turn to liquid cooling. Water’s specific heat capacity is roughly four times that of air by mass, and because water is far denser, a given volume of water can absorb thousands of times more heat than the same volume of air, making it a much more efficient cooling medium.
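A rough sketch makes the gap concrete. The property values below are standard room-temperature textbook figures (not data from this article), and the flow rate and temperature rise are arbitrary illustrative numbers:

```python
# Heat carried away by a coolant stream: q = rho * V_dot * c * dT
# Property values are standard approximations at room temperature.
RHO_WATER, C_WATER = 997.0, 4186.0   # kg/m^3, J/(kg*K)
RHO_AIR,   C_AIR   = 1.2,   1005.0   # kg/m^3, J/(kg*K)

def heat_removed_kw(rho, c, v_dot_m3s, dt_k):
    """Heat removed (kW) by a volumetric flow v_dot_m3s with temperature rise dt_k."""
    return rho * v_dot_m3s * c * dt_k / 1000.0

v_dot, dt = 0.01, 5.0  # illustrative: 10 L/s of coolant, 5 K temperature rise
q_water = heat_removed_kw(RHO_WATER, C_WATER, v_dot, dt)  # ~209 kW
q_air   = heat_removed_kw(RHO_AIR,   C_AIR,   v_dot, dt)  # ~0.06 kW

print(round(C_WATER / C_AIR, 1))   # per unit mass, water absorbs ~4.2x what air does
print(round(q_water / q_air))      # per unit volume, the gap is over 3,000x
```

The per-mass ratio is what the "roughly four times" figure refers to; the per-volume ratio is why pumping water beats blowing air for moving heat out of dense racks.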
However, conventional liquid cooling either requires refrigerants, which are being phased out in many countries, or relies on evaporation, which loses large amounts of water to the atmosphere, concentrates contaminants in the remaining water, and puts pressure on water treatment plants that are already straining to cope with growth.
Fortunately, there is a proven, mature alternative cooling technology that sidesteps all the weaknesses of conventional air cooling and liquid cooling in the data center.
For over 100 years, thermal plants and industrial processes have been using water-to-water transfer, utilizing the heat-sinking and energy-sinking capabilities of large bodies of water.
Collecting data center heat and dissipating it in a lake, river, or ocean provides a substantial improvement in energy and water utilization efficiency without the risks of downtime that plague other methods of cooling during heat waves.
Thanks to water’s thermal absorption capacity, flow and mixing, evaporative cooling at the surface, heat transfer to the underlying soil, and cooler nighttime temperatures, natural bodies of water stay much cooler than the surrounding air. Short-term air temperature fluctuations barely affect the water temperature at depth, so that water can be used to cool a data center.
However, knowing that it’s possible to do this and knowing how to do this are different things. Few data center operators have the expertise and patented innovations needed to do the job.
Fortunately, there is a way to move forward. Nautilus has commercialized natural water heat sink technology for the data center. We’re able to take advantage of the fact that natural bodies of water stay cool during extreme temperature events.
Since early 2021, colocation customers have been running production workloads in our floating data center moored at Stockton, California, on hardware cooled using our technology.
Of course, the recent heat wave in California was an unprecedented stress test for our approach. Could our closed-loop heat-sinking technology work through a heat wave?
Yes, it could, and we have the data to prove it. During September 1-9, 2022:
- The average temperature of our water intake was 79.9F (26.6C).
- The average air temperature high was 107.8F (42C), with a peak of 115F (46.1C) on September 6.
- Our ambient data center temperatures averaged 82.9F (28C), nearly 7 degrees below the 89.6F (32C) ASHRAE allowable maximum.
- The temperature increase from water intake to water outflow was only 4F (2.2C).
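A back-of-the-envelope check shows how much heat a modest flow can reject at that temperature rise. The flow rate below is a hypothetical example, not a Nautilus figure; only the 4F rise comes from the data above:

```python
# Heat rejected per unit of water flow at a 4F intake-to-outflow rise.
RHO_WATER = 997.0    # kg/m^3, density of water near room temperature
C_WATER = 4186.0     # J/(kg*K), specific heat of water

def f_to_k_delta(delta_f):
    """Convert a temperature *difference* from Fahrenheit to Kelvin."""
    return delta_f * 5.0 / 9.0

dt_k = f_to_k_delta(4.0)                         # 4F rise -> ~2.22 K
flow_l_per_s = 1.0                               # hypothetical: 1 liter per second
mass_flow = RHO_WATER * flow_l_per_s / 1000.0    # kg/s
q_kw = mass_flow * C_WATER * dt_k / 1000.0       # heat rejected, kW

print(round(q_kw, 1))   # ~9.3 kW per liter-per-second of flow
```

In other words, even with only a 4F rise, each liter per second of river water carries away on the order of 9 kW of IT heat, which is why a small temperature delta is compatible with substantial cooling capacity.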
Did our water-cooling technology work? It’s clear that it did. We provided a fully resilient cooling solution for our colocation customers during the worst heat wave in California history. And we did it without consuming a drop of water or relying on refrigerants, a critical advantage in California, which is both environmentally sensitive and in the midst of a multi-year drought that has resulted in many state-mandated water supply restrictions.
We’re able to support the state, and the world, by leading the way toward a more sustainable future for data centers.
Interested in learning more about our ability to create new levels of cooling resilience while improving your sustainability profile? Learn more about our capabilities at nautilusdt.com.