Technology transfer company Aquila, together with Clustered Systems, has delivered Aquarius, a liquid-cooled supercomputing system built in standard Open Compute Project (OCP) racks.
The racks, first announced last year, are now available to order, and will be on show at DatacenterDynamics’ Colo + Cloud event this month in Dallas. The system adds liquid cooling from Clustered Systems to OCP standard Open Racks. The cold plate system can remove the heat from high-temperature servers while allowing them to be easily removed and changed, providing high-density high-performance computing (HPC) “without trade-offs”.
“The drive towards Exascale computing requires cooling the next generation of extremely hot CPUs, while staying within a manageable power envelope,” said Bob Bolz, HPC and data center business development at Aquila. “Liquid cooling holds the key.”
The system uses warm water to cool the servers, and reduces the cooling budget to less than five percent of the data center cost. Using a fixed cold plate instead of running liquid to individual heatsinks, the system addresses two barriers to water cooling: reliability and re-use.
By keeping the cooling water in a separate circuit with a fixed cold plate, the system minimizes the use of plastic hoses for piping water and the need to change that piping with any system upgrade. There are no water pipes or plastic hoses near the processors, so the risk of water leaks is minimized. Effective and stable cooling should actually reduce the failure rate of the chips in the servers, Aquila claims.
Hot-swappable boards are supported by the OCP architecture, with a hot swap power board designed by Aquila to block any power spikes that may cause instability while adding or removing servers.
Other systems with complex cooling circuits have run into problems because dissimilar metals in the circuit caused corrosion, Bolz said. The simpler Aquila design eliminates this problem.
The cold plate also means that existing off-the-shelf hardware needs only minimal changes to fit the new system: the servers simply have to make contact with the cold plate. Warm water already present in the data center, such as the exhaust from existing rear-door chiller systems, can be used as input, so no additional cooling infrastructure is required to expand compute capacity.
It can take as much power to cool servers as it does to run them, so the system might offer up to 50 percent energy savings on the server power envelope, and pay for itself in one year, Bolz told DCD. Assuming enough power is available to the rack, the system can increase server density by a factor of five to ten. The system also eliminates all fans and individual power supplies.
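The savings claim can be sanity-checked with a back-of-envelope calculation. The sketch below uses illustrative numbers, not Aquila figures: it assumes a 100kW IT load per rack, conventional air cooling drawing as much power as the servers themselves (the worst case quoted above), and warm-water cooling overhead of roughly five percent.

```python
# Back-of-envelope energy comparison based on the figures quoted in
# the article. All inputs are illustrative assumptions, not vendor data.

it_load_kw = 100.0           # assumed IT (server) power per rack
air_cooling_kw = it_load_kw  # worst case: cooling draws as much as the servers
warm_water_cooling_kw = 0.05 * it_load_kw  # ~5% overhead for warm-water cooling

baseline_total = it_load_kw + air_cooling_kw          # 200 kW
aquarius_total = it_load_kw + warm_water_cooling_kw   # 105 kW

savings = 1 - aquarius_total / baseline_total
print(f"Estimated savings: {savings:.0%}")  # roughly 48% in this scenario
```

Under these assumptions the saving approaches, but does not quite reach, the 50 percent ceiling quoted, since the warm-water loop still consumes some pumping power.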
“Compute power requirements for HPC, mobility, OpenStack, and SDN data center applications continue to escalate the need for solutions that address density,” said Phil Hughes, Clustered Systems CEO. “And data centers cooled with air in a conventional fashion have fundamental restrictions on power density. Warm water liquid cooling has none of these restrictions and can be packed very densely, without a need for a specialized building.”
Better than the rest?
Although the design uses the standard OCP rack, which is open source hardware, it’s built as a modular insert, so there’s no requirement for Aquila and Clustered Systems to open source the specs. The system allows twelve cold plates in each OCP modular insert, each cooling three servers, which allows 108 servers per rack. It supports a standard Intel 6.4in board with any Xeon E5-2600 v3 or v4 processor.
The system can cool up to 100kW per rack, which is higher than competitors such as CoolIT, Bolz said. It also supports more servers than Penguin’s Tundra, which uses an Asetek cooling system, he said.
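The density figures quoted above imply three modular inserts per rack. The sketch below works through that arithmetic; the per-rack insert count is inferred from the numbers in the article, and the per-server power budget assumes the full 100kW rating is used.

```python
# Capacity arithmetic from the figures quoted in the article.
# The inserts-per-rack value is inferred (108 / (12 * 3)), not stated.

cold_plates_per_insert = 12
servers_per_cold_plate = 3
inserts_per_rack = 3  # inferred from the 108-servers-per-rack figure

servers_per_insert = cold_plates_per_insert * servers_per_cold_plate  # 36
servers_per_rack = servers_per_insert * inserts_per_rack              # 108

rack_cooling_kw = 100.0
kw_per_server = rack_cooling_kw / servers_per_rack  # ~0.93 kW per server
print(servers_per_rack, round(kw_per_server, 2))
```

At roughly 0.93kW of cooling budget per server, the rack could plausibly handle fully loaded dual-socket E5-2600-class nodes, consistent with the 100kW claim.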
As well as traditional supercomputing applications, the system could enable remotely managed, closet-sized supercomputers located wherever edge computing demands them, because it does not need a data center environment, Bolz said. The company is investigating liquid-cooled switches and power supplies as well as the servers.
Production units will be shipping this quarter (Q4 2016).