Liquid-cooled systems are normally pretty exotic. They use complex plumbing, or they plunge the electronics into a bath of fluid, which creates its own technical difficulties. These complexities are what put people off liquid cooling.
If liquid cooling is to progress, it will be by becoming simpler and more standard. But it's not that easy to get liquid to the heat-generating parts of an IT system.
Pipes are a risk
If you don’t go with a bath of dielectric, you either keep the water at arm’s length from the servers, circulating it in the rack doors (which doesn’t cool that effectively), or you take it to heatsinks on the processors and other components, using a network of hoses. Those hoses can be fragile, and at the very least they make it fiddlier to change and upgrade components.
Aquarius, a liquid-cooled high-performance computing system, uses a plumbing system which minimizes joints and maximizes cooling to the processors, and it manages to package the whole thing up in the Open Compute Project’s standard Open Racks.
The approach uses a fixed cold plate, cooled by liquid circulating past one side. The other side is in thermal contact with heatsinks on the processors in the system. This means any server board can be taken out without disturbing any hose connections.
Aquarius is made by Aquila, a tech transfer company working with cooling technology from Clustered Systems.
We heard this was coming in 2015. This year, Aquila launched the Aquarius HPC system, which packs some 108 servers into a rack that consumes up to 100kW. Very little of that power goes to the cooling, though. The water goes into the system warm, and comes out hot. This means Aquarius doesn’t need its own cold water supply - it can be fed with the outflow of other cooling systems, such as liquid-cooled rack doors.
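A back-of-envelope heat balance shows why warm water works: the heat a flow carries away is Q = ṁ·cp·ΔT, so a modest temperature rise across a 100kW rack needs a manageable flow rate. The inlet and outlet temperatures below are illustrative assumptions, not Aquila's published figures:

```python
# Rough heat-balance sketch: how much water flow is needed to carry
# 100 kW out of a rack, given Q = m_dot * c_p * delta_T?
# The temperatures are illustrative assumptions, not Aquila's specs.

Q = 100_000.0    # rack heat load, watts
c_p = 4186.0     # specific heat of water, J/(kg*K)
t_in = 40.0      # assumed "warm" inlet temperature, deg C
t_out = 55.0     # assumed "hot" outlet temperature, deg C

delta_t = t_out - t_in           # 15 K rise across the rack
m_dot = Q / (c_p * delta_t)      # mass flow in kg/s
litres_per_min = m_dot * 60.0    # water is ~1 kg per litre

print(f"{m_dot:.2f} kg/s, about {litres_per_min:.0f} L/min")
```

Under these assumed temperatures the rack needs only about 1.6 kg/s of water, and the 40°C inlet is warm enough to come from another system's return loop rather than a chiller.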
In other words, if your data center already has liquid circulating in older-style door cooling systems, you don’t need to add much to put in one of these units.
On the IT side, Aquarius makes use of the Open Rack's benefits: the dense packing of servers, and the simplified power distribution, which means server boards can be swapped easily without any danger of power spikes.
Taking it to the edge?
It’s still not clear whether this needs to go mainstream. Webscale providers are still finding it easy and cost-effective to cool their servers with air (especially as free air cooling, chilled water and evaporation work very well).
Clustered and Aquila point to several benefits. For instance, these systems can be deployed in smaller units, and don’t need traditional data center cooling, so they could come into use for closet-sized “edge” data centers.
So last week, at SC16, Aquila followed up its HPC system with a plan for a modular edge data center, due to be delivered with TAS Energy in 2017. That’s an interesting possible use - let’s see if customers agree.
A version of this story appeared on Green Data Center News