We have all heard, in hundreds of ways, that AI, especially generative AI, will transform the world. By now, we’ve all used various AI tools. We’ve read about the dozens or hundreds of use cases AI can enable. We know that AI could unlock new ways to transform the nature of work and the impact companies can deliver.

One hope, touted by many, is that AI will deliver new capabilities to improve sustainability, efficiency, and performance objectives, whether for corporations or countries. We hope AI can improve energy efficiency, reduce waste, make products more sustainable, and cut supply chain emissions.

AI certainly has the potential to support these objectives, but there’s a strange paradox surrounding AI and sustainability.

Generative AI is an unprecedented technology; it’s going to be applied to countless initiatives, many of which could advance environmental goals. At the same time, generative AI itself has the potential to block or limit progress toward those very goals.

The potential problem is easy to understand. Will the hardware needed for generative AI consume so many resources that organizations find themselves missing sustainability targets? Will massive growth in new data centers increase emissions and over-utilize scarce resources to the point that environmental goals can’t be met?

Perhaps this seems unlikely, but Synergy Research Group reports that hyperscale data center capacity will almost triple over the next six years, driven by AI. That growth, of course, does not include the colocation providers and enterprises that are rapidly retooling, retrofitting, and pursuing new data center builds to position themselves for a future of AI.

It’s still too early to discern just how much environmental impact we’ll see from accelerated data center construction. But it’s easy to anticipate some of the effects of a push toward supporting generative AI at scale, including:

  • Massive power utilization: Gartner believes that AI will consume more power than the human workforce by 2025. One colocation leader told us it anticipates needing 5GW of power capacity by 2027, up from its current 2.7GW. (A back-of-the-envelope sketch of what loads like these mean in practice follows this list.)
  • Power generation and power distribution: Power grids are already stretched, and regulators are raising alarms. In Virginia, Dominion Energy announced that it could not supply power to planned data centers. We are already seeing demand for new power plants and distribution lines to cope with data center growth, each of which brings its own resource consumption and emissions.
  • Land use, water use, and construction material use: Land in favorable areas is in short supply, water utilization is under political pressure, and producing construction materials often generates massive amounts of greenhouse gases.
  • Construction itself: Land clearing, materials transportation, construction equipment, and assembly all use up resources, and eco-conscious data centers are rethinking their designs to cut carbon emissions.
  • New hardware: Not just servers, storage, and networking, but GPUs and DPUs, cabling, new racks, and power distribution gear. AI clusters require thousands of servers, and server manufacturing creates emissions at every stage, from mining the minerals needed for production to final distribution.
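To put loads like these in perspective, here’s a rough back-of-the-envelope sketch in Python. Every input (cluster size, per-server draw, PUE, grid carbon intensity) is an illustrative assumption, not a figure from Gartner or any specific provider.

```python
# Back-of-the-envelope estimate of an AI cluster's annual energy and emissions.
# All inputs are illustrative assumptions, not vendor or provider data.

servers = 10_000            # assumed cluster size ("thousands of servers")
kw_per_server = 3.0         # assumed draw per AI server (per the bullet above)
pue = 1.5                   # assumed power usage effectiveness (cooling + overhead)
grid_kg_co2_per_kwh = 0.4   # assumed grid carbon intensity, kg CO2 per kWh

it_load_mw = servers * kw_per_server / 1_000          # 30 MW of IT load
facility_mw = it_load_mw * pue                        # 45 MW at the meter
annual_mwh = facility_mw * 24 * 365                   # energy over a full year
annual_tons_co2 = annual_mwh * grid_kg_co2_per_kwh    # MWh x kg/kWh = metric tons

print(f"IT load:        {it_load_mw:.1f} MW")
print(f"Facility load:  {facility_mw:.1f} MW at PUE {pue}")
print(f"Annual energy:  {annual_mwh:,.0f} MWh")
print(f"Annual CO2:     {annual_tons_co2:,.0f} metric tons")
```

Even under these modest assumptions, a single cluster draws tens of megawatts around the clock, which is why a handful of such deployments quickly adds up to the gigawatt-scale demand described above.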

Fortunately, there is a proven technology that allows organizations to cope with server sprawl and unprecedented data center power consumption. It cuts land use, construction material use, and water use while speeding time to completion for new data centers. That technology is liquid cooling, and you’ve likely been hearing a lot about it lately.

Liquid cooling is becoming the hottest topic in the industry because, to put it simply, it’s necessary if organizations want to scale their generative AI infrastructure without compromising their environmental goals, both direct and indirect.

With liquid cooling, organizations have a new tool that reduces some of the critical emissions and resource challenges they’ll face in coming years. Liquid cooling can:

  • Increase cooling efficiency, a necessity as individual AI servers now draw 3kW or more. Conventional air cooling is highly inefficient by comparison, requiring far more electricity and water than liquid cooling and wasting energy in the process.
  • Cope with unprecedented densities of 80kW of servers per rack or more. This simple change has a cascade of domino effects, reducing the number of racks, the amount of cabling, and the floor space needed (see the sketch after this list).
  • Enable data center providers to construct smaller data centers without compromising compute capacity, reducing the volume of construction material and minimizing construction time.
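To make that domino effect concrete, here’s a minimal sketch comparing an air-cooled build-out with a liquid-cooled one for the same IT load. The per-rack densities, the footprint per rack, and the 30MW load are illustrative assumptions, not measurements from any particular facility.

```python
# Compare rack count and floor space for the same IT load under two cooling
# approaches. Densities and footprints are illustrative assumptions.
import math

it_load_kw = 30_000  # same assumed 30 MW IT load as the sketch above

def footprint(rack_density_kw: float, sqft_per_rack: float) -> tuple[int, float]:
    """Racks needed, and white space consumed, at a given (assumed) density."""
    racks = math.ceil(it_load_kw / rack_density_kw)
    return racks, racks * sqft_per_rack

air_racks, air_sqft = footprint(rack_density_kw=15, sqft_per_rack=30)  # air-cooled
liq_racks, liq_sqft = footprint(rack_density_kw=80, sqft_per_rack=30)  # liquid-cooled

print(f"Air-cooled:    {air_racks:,} racks, ~{air_sqft:,.0f} sq ft of white space")
print(f"Liquid-cooled: {liq_racks:,} racks, ~{liq_sqft:,.0f} sq ft of white space")
print(f"Reduction:     {1 - liq_racks / air_racks:.0%} fewer racks")
```

Under these assumptions, the same compute fits in roughly a fifth of the racks and floor space, which is exactly where the savings in construction materials, cabling, and build time come from.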

In our view, liquid cooling is a necessity for generative AI workloads, and it will become a key element in meeting sustainability objectives as organizations build more and more data centers to capture the opportunities AI offers.

To learn more about the Nautilus perspective on liquid cooling, check out our blog at https://nautilusdt.com/news-updates/.