There is no Moore’s Law equivalent in cooling technology. As processors relentlessly get faster and denser, the number of high-power servers a company can screw into a data center rack is limited by the amount of power available to that rack and the amount of cold air the facility can push to it.

The low-hanging fruit of facilities efficiency has been picked in modern data centers – things like hot-aisle containment, air-side economization, blanking panels and marginally higher ambient temperatures – and cooling has seen nothing close to the disruptive innovation that happens regularly in IT.

That is not to say nobody is trying to introduce that level of disruption to the world of cooling. One person who is trying is Mario Facussé, founder and CEO of Xyber Technologies. His idea is to push an existing and widely used cooling technology – heat pipes – to the limit of its capabilities, a limit he says is far beyond what the technology does today.

“Heat pipes have been used in electronics for a relatively long time,” says Facussé, a Honduras native. In computers, heat pipes usually carry heat from the CPU to a heat sink, which is then cooled by fans. What is innovative about Xyber is that its server design uses heat pipes to move heat out of the server efficiently enough that no fans are required inside the server and no cold air is required on the facilities side.

A heat pipe is a sealed pipe made of a thermally conductive material (Xyber uses a special type of anodized aluminum) with liquid inside. Simply put, the liquid evaporates at the end of the pipe attached to the heat source and the vapor travels to the cooler end, where it condenses back into liquid and is drawn back to the hot end through a porous wick.
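
A rough, generic sketch of the physics (the figures below are illustrative, not Xyber’s) shows why such a simple device can move so much heat: the power a pipe carries is the mass of working fluid it evaporates per second multiplied by the fluid’s latent heat of vaporization,

Q = \dot{m} \cdot h_{fg}

Water, for example, absorbs about 2.26 MJ per kilogram as it evaporates, so circulating only about 0.044 g of water per second transports roughly 100 W – which is why a thin, sealed pipe with no moving parts can handle CPU-class heat loads.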

In typical server designs that use heat pipes, the hot end of the pipe is attached to a CPU, while the cold end is attached to a heat sink. In Xyber’s design, the server itself is the heat sink.

“The entire server assumes the role of the heat sink for all the components,” Facussé explains. The only thing needed to carry the heat away from the server entirely is airflow. The facility still has to deliver airflow to the server, but the air does not need to be cooled; the only requirement is that the air be cooler than the box. So, if the box is at 40C, a room temperature of 39C will suffice, Facussé says. “It’s a temperature balancer. As long as there’s a difference in temperature, it keeps working.”
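
A back-of-the-envelope way to see this (the numbers here are illustrative, not Xyber’s specifications) is Newton’s law of cooling: the heat a surface sheds to passing air scales with the temperature gap between them,

Q = h \cdot A \cdot (T_{box} - T_{air})

where A is the exposed surface area and h the convective heat transfer coefficient. With a 40C box in 39C air the gap is only 1C, so heat still flows out, just slowly; widen the gap to 15C and the same surface sheds roughly fifteen times as much heat per second.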

In some cases, natural convection from the heat leaving the server will create all the necessary airflow. That depends, however, on the components inside. The design Xyber is currently pitching includes four of the latest 10-core Intel Xeon CPUs; it requires external airflow to be delivered to the server but zero air conditioning, he says.
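
A rough, illustrative estimate (none of these figures come from Xyber) shows why. Natural convection in still air typically yields a heat transfer coefficient of around 5–10 W/m²·K, so even a generous 1 m² of finned chassis surface running 15C above the room sheds only on the order of

Q \approx 10 \cdot 1 \cdot 15 = 150 \text{ W}

while four server-class 10-core Xeons can dissipate several hundred watts between them. Hence the need for externally delivered airflow over the chassis, even though that air never has to be chilled.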

Not only does the approach eliminate the need for computer room air conditioners (CRACs) in a data center, it also expands the range of locations where a data center can be placed and cooled with outside air, Facussé says.

Xyber’s business model is based on licensing its server design to server vendors, such as HP or Dell, or to large web-scale users that pack their data centers with servers they design themselves. The latter group includes the likes of Google and Facebook, who bypass traditional OEMs and go straight to the manufacturers for their servers.

The company has not raised any outside capital yet; Facussé has been funding the venture out of his own pocket, together with Scott Kosch, managing partner of Kosch Capital Management, who acts as an investor and adviser to Xyber. Facussé has also been in talks with Sapience Capital Partners. So far, Facussé and Kosch have invested about US$2m in the company.