France-based Qarnot Computing makes a nice-looking radiator - but as well as heating the room, the Q.Rad performs high-performance computing tasks for remote clients.
“When users turn up the thermostat, enough extra computation is rustled up from corporate clients to increase the emitted heat,” says an article from the BBC’s Peter Day, one of the more recent journalists to cover Qarnot, a company founded by Paul Benoit in 2010.
Working in the IT department of a large French bank, Benoit was concerned about the amount of heat wasted by the bank’s servers. He wanted to use this heat, instead of venting it to the outside world.
But using that heat turns out to be tricky. Qarnot is his solution to what is, essentially, a problem of supply and demand.
The effort of cooling
All the energy that goes into a server eventually comes out in the form of heat, and that heat has to go somewhere. Most sites use active cooling, where chillers or other equipment remove the heat - using more energy to remove the waste.
Data centers can move to passive cooling systems which use less energy to remove the heat. But why can’t we use this heat productively?
Using waste heat is essentially a recycling scheme reclaiming value from waste. Just as in recycling schemes for paper or glass, you have to have a demand for the stuff you are throwing out - and you have to supply it in a good enough form to be useful.
And finally, if there really is a demand for it, you have to make sure you keep producing it.
Big data centers have trouble with all of these things. Firstly, the heat is usually in the form of hot (or warm) air. This is not a concentrated form of heat, and can’t easily be transported any distance. Like low-grade waste paper, there’s a limit to what you can do with it.
You can use it to heat rooms in the same building, but data centers aren’t always in the same building as office space or homes. There may be no demand for your hot air - and in most climates, the office won’t want any at all in the summer.
Also, what happens if your data center is idle on a cold day, when your heat-customers want heat?
Moving to water
Some data centers move to hot water, because fluid is more efficient for cooling. Most high-performance computing (HPC) sites are cooled by water or other fluids, and there’s a body of opinion that general sites will eventually move that way.
Fluid carries heat off in a more concentrated form - which means it can be re-used more easily. And HPC sites have a real benefit in that most are running continuously so, for instance, an IBM supercomputing site in Zurich can be part of a district heating system.
Firms like Iceotope are moving towards making this idea usable in more general data centers.
The closer the fluid gets to the servers, the hotter it can be run, and the better the quality of heat output. Water at 70°C can move some distance, and be used for district heating systems, or in some industrial processes.
The trouble is that data centers aren’t necessarily located close enough to anyone that wants the heat.
Hence the idea of computing radiators. Three years ago, a paper from Microsoft Research proposed the “data furnace”: a server that performs remote computing tasks while heating a building.
Heating in reality
As far as I know, Qarnot has got the furthest towards a real data furnace product. The company puts four servers into the back of the radiator, and distributes processing tasks to them with its own Q.ware cloud software.
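The scheduling constraint this implies - run jobs only where heat is wanted - can be sketched roughly as follows. Q.ware's actual design is not public, so the class names, the boolean heat-demand signal, and the dispatch policy here are purely illustrative assumptions:

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Radiator:
    # Hypothetical model of a Q.Rad-style unit: it reports whether
    # its room's thermostat is asking for more heat.
    name: str
    heat_demand: bool = False
    jobs: list = field(default_factory=list)

class HeatScheduler:
    """Illustrative only: dispatch compute jobs to radiators that want heat."""

    def __init__(self, radiators):
        self.radiators = radiators
        self.queue = deque()  # corporate jobs waiting for a warm home

    def submit(self, job):
        self.queue.append(job)
        self.dispatch()

    def dispatch(self):
        # Jobs run only where a thermostat asks for heat; otherwise they
        # sit in the queue - exactly the supply/demand mismatch the
        # article describes.
        for rad in self.radiators:
            while rad.heat_demand and self.queue:
                rad.jobs.append(self.queue.popleft())

rads = [Radiator("living-room", heat_demand=True), Radiator("office")]
sched = HeatScheduler(rads)
for job in ["render-frames", "risk-simulation", "protein-fold"]:
    sched.submit(job)
```

In this toy run, every job lands on the one radiator whose thermostat is on, and a cold snap with no demand would leave the whole queue waiting - which is the scheduling worry discussed below.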
My gut feeling is that - at least right now - Qarnot has an uphill struggle. There are big savings to be had by aggregating servers in a large site. In most cases, I’d expect them to be greater than the savings from re-using the heat.
If we really are sending compute jobs out only when heat is required (as the BBC article suggests) then we have a lot of idle hardware, and quite possibly we have corporate clients whose jobs have to wait till someone puts their heating on.
I suspect this is not how the world works, but I may be wrong.
For now, I still think hot water in centralized sites will eventually be the way to go.
A version of this article appeared on Green Data Center News.