Two years after the start of the pandemic, the data center industry has seen an unprecedented boom in digital demand as organizations in every industry work to keep our day-to-day lives running. In healthcare, for example, telehealth visits increased 63-fold, from 840,000 in 2019 to 52.7 million in 2020.
According to Gartner, end-user spending on public cloud services is slated to reach $482 billion in 2022. In a Harvard Business Review survey, 86 percent of respondents said that artificial intelligence (AI) has become a mainstream technology at their organization, and 67 percent expected to accelerate AI adoption in 2021. These services remain as essential as ever, and they only scratch the surface of the network complexity to come.
As today’s networks become more complex and more distributed, and as augmented and virtual reality applications grow more prominent, the need for real-time computing and decision-making becomes more critical. These real-time workloads are latency-sensitive, and under the increasingly common hybrid model of enterprise, public and private clouds, colocation, and edge, full-time manual management has become increasingly difficult.
Therefore, AI and machine learning (ML) will be critical to optimizing the performance of these networks and enabling more remote monitoring solutions. Adding to our digital demand is the continued rollout of 5G, which promises to be 500 percent faster than its 4G predecessor and has businesses racing to get a piece of its $23.2 billion in projected revenue for 2022.
These advances inevitably come at a price – increased computing and heat densities. High performance computing (HPC) has rapidly accelerated to support AI, ML, and 5G, and it solves numerous enterprise business challenges. For many data center operators, this will soon necessitate high-density cabinets and data centers that require infrastructure changes to cool these critical systems.
As rack densities approach and exceed 30 kilowatts (kW), air cooling systems may not be sufficient, no matter how well they are optimized. Despite air cooling’s considerable evolution to address rising densities efficiently, there is a point at which air simply does not have the thermal transfer properties required to cool high-density racks. Organizations that ignore these limitations should anticipate higher energy costs, reduced performance and, eventually, delayed implementations.
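A rough sensible-heat calculation shows why. The sketch below (a simplified estimate, not from the article, assuming standard air properties and a typical 15 degrees Celsius inlet-to-outlet temperature rise) computes the airflow a single 30 kW rack would demand:

```python
# Rough estimate of the airflow needed to remove 30 kW of heat with air.
# Assumptions (illustrative, not from the article): air at roughly sea
# level and 20 C, and a 15 C temperature rise across the servers.

RACK_LOAD_W = 30_000   # rack heat load, watts
AIR_DENSITY = 1.2      # kg/m^3
AIR_CP = 1005.0        # specific heat of air, J/(kg*K)
DELTA_T = 15.0         # inlet-to-outlet temperature rise, K

mass_flow = RACK_LOAD_W / (AIR_CP * DELTA_T)   # kg/s of air required
volume_flow = mass_flow / AIR_DENSITY          # m^3/s
cfm = volume_flow * 2118.88                    # m^3/s to cubic feet/min

print(f"Required airflow: {volume_flow:.2f} m^3/s (~{cfm:,.0f} CFM)")
# -> roughly 1.7 m^3/s, or about 3,500 CFM, through one rack alone
```

Moving that much air through a single cabinet, and the fan power it implies, is where air-based designs start to break down.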
The most viable alternative to air cooling is bringing liquid cooling to the rack. Liquid cooling leverages the higher thermal transfer properties of water or other fluids to support efficient and cost-effective cooling of high-density racks. Liquid cooling is available in a variety of configurations that use different technologies, including rear-door heat exchangers, direct-to-chip cooling, and immersion cooling.
While liquid cooling is often regarded as a niche application that is years away from mainstream adoption, industry groups such as the Open19 Foundation and the Open Compute Project bring together industry leaders to address the challenges presented by continued increases in compute density. Through these collaborations, industry leaders have made great advances and developed several products that make liquid cooling a viable solution for a broader audience.
In simplified terms, here is how liquid cooling works: A cool liquid is circulated to cold-plate heat exchangers embedded in the IT equipment. This provides efficient cooling, since the cooling medium goes directly to the IT equipment rather than cooling the entire space. It can be up to 3,000 times more effective than using air, enabling the central processing units (CPUs) and graphics processing units (GPUs) in densely packed racks to operate continuously at their maximum voltage and clock frequency without overheating.
This, combined with the reduction or elimination of fans required to move air across the data center and through servers, can create significant energy savings for liquid-cooled data centers. Additionally, the pumps required for liquid cooling consume less power than the fans needed to accomplish the same cooling.
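The scale of that advantage follows from basic fluid properties. The comparison below is a simplified sketch; it considers only volumetric heat capacity and ignores conductivity and convection effects, which also favor liquids, but it shows why figures like “up to 3,000 times” are plausible:

```python
# Compare how much heat a given volume of water versus air can carry per
# degree of temperature rise (volumetric heat capacity = density * cp).
# Property values are standard approximations at around 20 C.

WATER_DENSITY, WATER_CP = 998.0, 4186.0   # kg/m^3, J/(kg*K)
AIR_DENSITY, AIR_CP = 1.2, 1005.0

water_vhc = WATER_DENSITY * WATER_CP   # J/(m^3*K)
air_vhc = AIR_DENSITY * AIR_CP

print(f"Water: {water_vhc / 1e6:.2f} MJ/(m^3*K)")
print(f"Air:   {air_vhc / 1e3:.2f} kJ/(m^3*K)")
print(f"Ratio: {water_vhc / air_vhc:,.0f}x")
# -> water carries roughly 3,500x more heat per unit volume per degree
```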
Types of liquid cooling
Rear-door heat exchangers are a mature technology that doesn’t bring liquid directly to the server but does utilize the high thermal transfer properties of liquid. In a passive rear-door heat exchanger, a liquid-filled coil is installed in place of the rear door of the rack, and as server fans move heated air through the rack, the coil absorbs the heat before the air enters the data center. In an active design, fans integrated into the unit pull air through the coils for enhanced thermal performance.
In direct-to-chip liquid cooling, cold plates sit atop a server’s main heat-generating components to draw off heat through a single-phase or two-phase process. Single-phase cold plates use a cooling fluid looped into the cold plate to absorb heat from server components. In the two-phase process, a low-pressure dielectric liquid flows into evaporators, and the heat generated by server components boils the fluid. The heat is released from the evaporator as vapor and transferred outside the rack for heat rejection.
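To put the flow requirements in perspective, here is a minimal sketch (assuming a water-based single-phase loop and a 10 degrees Celsius coolant temperature rise, both illustrative) of what it takes to carry the same 30 kW rack load used in the air example above:

```python
# Estimate the coolant flow a single-phase cold-plate loop needs to
# carry a 30 kW rack load. Assumptions (illustrative): water-like
# coolant and a 10 C temperature rise across the loop.

RACK_LOAD_W = 30_000     # total heat absorbed by the cold plates, W
COOLANT_CP = 4186.0      # J/(kg*K), water
COOLANT_DENSITY = 998.0  # kg/m^3
DELTA_T = 10.0           # coolant temperature rise, K

mass_flow = RACK_LOAD_W / (COOLANT_CP * DELTA_T)          # kg/s
liters_per_min = mass_flow / COOLANT_DENSITY * 1000 * 60
gpm = liters_per_min / 3.785                              # US gallons/min

print(f"Coolant flow: {liters_per_min:.0f} L/min (~{gpm:.0f} GPM)")
# -> about 43 L/min (~11 GPM), versus ~3,500 CFM of air for the same load
```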
With immersion cooling, servers and other components in the rack are submerged in a thermally conductive dielectric liquid or fluid. In single-phase immersion systems, heat is transferred to a coolant via direct contact with server components and removed by heat exchangers outside the immersion tank. In two-phase immersion cooling, the dielectric fluid is engineered to have a specific boiling point that protects IT equipment but enables efficient heat removal. Heat from the servers changes the phase of the fluid, and the rising vapor is condensed back to liquid by coils located at the top of the tank.
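Two-phase designs gain an extra advantage from latent heat: vaporizing the fluid absorbs far more energy than merely warming it, and does so at a nearly constant temperature. The comparison below uses illustrative property values for a generic fluorocarbon dielectric fluid (actual products vary; these numbers are assumptions, not from the article):

```python
# Compare the heat 1 kg of a dielectric fluid absorbs by boiling versus
# by warming 10 C without boiling. Property values are rough,
# illustrative figures for a generic fluorocarbon coolant.

LATENT_HEAT = 90_000.0   # J/kg absorbed on vaporization (approximate)
CP_LIQUID = 1_100.0      # J/(kg*K) sensible heat capacity (approximate)
DELTA_T = 10.0           # K of sensible warming for comparison

sensible = CP_LIQUID * DELTA_T   # heat absorbed without boiling, J/kg
ratio = LATENT_HEAT / sensible

print(f"Boiling: {LATENT_HEAT / 1000:.0f} kJ/kg; "
      f"{DELTA_T:.0f} C sensible rise: {sensible / 1000:.0f} kJ/kg "
      f"({ratio:.0f}x less)")
# -> vaporization moves roughly 8x more heat per kilogram of fluid,
#    at a stable temperature that protects the IT equipment
```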
Liquid cooling as a roadmap for continued success
For an organization planning to use liquid cooling to meet new HPC-related infrastructure requirements and challenges, there are several other benefits beyond efficiency and reliability. Those benefits include:
- Improved performance: A liquid cooling system will not only enable the desired reliability, but also deliver IT performance benefits. As processor case temperatures approach the maximum safe operating temperature, as is likely to occur with air cooling of dense racks, processor performance is throttled back to avoid thermal runaway. Liquid cooling keeps case temperatures well below that threshold, allowing processors to sustain full speed.
- Sustainability: Not only does liquid cooling create opportunities to reduce data center energy consumption and drive power usage effectiveness (PUE) down to near 1.0, it also provides a more effective approach for repurposing captured heat to reduce the demand on building heating systems. The return-water temperature from these systems can be 140 degrees Fahrenheit (60 degrees Celsius) or higher, and liquid-to-liquid heat transfer is more efficient than is possible with air-based systems (see the sketch after this list).
- Maximized space utilization: The density enabled by liquid cooling allows a facility to better use existing data center space, eliminating the need for expansions or new construction, or to build smaller-footprint facilities. It also enables processing-intensive edge applications to be supported where physical space is limited.
- Lower total cost of ownership (TCO): In the report Liquid-Cooled IT Equipment in Data Centers: Total Cost of Ownership, ASHRAE conducted a detailed cost of ownership analysis of air-cooled data centers versus a hybrid (air- and liquid-cooled data centers) model and found that, while a number of variables can influence TCO, “liquid cooling creates the possibility for improved TCO through higher density, increased use of free cooling, improved performance and improved performance per watt.”
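To put the sustainability point above in concrete terms, here is a rough sketch of the annual energy picture for a hypothetical 1 MW IT load. The PUE values and the 80 percent heat-capture fraction are illustrative assumptions, not figures from the article or the ASHRAE report:

```python
# Rough annual-energy comparison for a hypothetical 1 MW IT load at an
# air-cooled PUE of 1.5 versus a liquid-cooled PUE near 1.1, plus the
# heat a 60 C return loop could make available for reuse. IT load, PUE
# values, and the 80% capture fraction are illustrative assumptions.

IT_LOAD_KW = 1_000.0
HOURS_PER_YEAR = 8_760
PUE_AIR, PUE_LIQUID = 1.5, 1.1
HEAT_CAPTURE_FRACTION = 0.8  # share of IT heat captured by the liquid loop

def annual_mwh(pue):
    """Total facility energy per year, in MWh, for a given PUE."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR / 1000.0

savings = annual_mwh(PUE_AIR) - annual_mwh(PUE_LIQUID)
reusable = IT_LOAD_KW * HEAT_CAPTURE_FRACTION * HOURS_PER_YEAR / 1000.0

print(f"At PUE {PUE_AIR}: {annual_mwh(PUE_AIR):,.0f} MWh/yr")
print(f"At PUE {PUE_LIQUID}: {annual_mwh(PUE_LIQUID):,.0f} MWh/yr")
print(f"Savings: {savings:,.0f} MWh/yr; reusable heat: {reusable:,.0f} MWh/yr")
# -> roughly 3,500 MWh/yr saved, with ~7,000 MWh/yr of warm water
#    available to offset building heating demand
```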
For organizational leaders dealing with the challenges of increasing rack densities, it may be time to recognize the limits of air cooling and consider liquid cooling to help meet energy and sustainability goals. For those deploying extremely high-density racks (greater than 30 kW), there may be no other choice.
However, this is a complicated process, so it is important for organizations to work with the right partner to ensure the success of any liquid cooling deployment. To learn more, read our white paper, Understanding Data Center Liquid Cooling Options and Infrastructure Requirements.
If you’re attending this year’s Data Center World – AFCOM (March 28-31), Vertiv will be exhibiting in booth #619. Additionally, we will be discussing this topic in our panel, “High Density Compute: Opportunities, Challenges & Roadmap to Success,” on Tuesday, March 29, at 3 p.m. Register today!