In physics, the ‘Big Crunch’ theory proposes that, just as the universe expanded in the Big Bang, so too will it contract, recollapse and ultimately end. It’s a surprisingly fitting analogy for IT, and I predict that we’ll see something akin to a Big Crunch in the decade ahead.

Don’t worry, we won’t end up in a black hole. We’ll just have smaller and denser computing – the kind that could make air cooling redundant and radically transform the traditional data centre or HPC environment.      

[Image: Iceotope’s Petagen – Peter Judge]

Spreading out the heat

We’re already creating super-dense IT that is incredibly difficult to cool using fans. This is bad news, since fan-based air cooling is what you’ll find in almost every data centre on the planet. That might sound alarming but, looking back through the history of physical IT, you can start to see a clear picture of how we got here.

Over the last 30 years, since the introduction of CMOS chips, the power cost of processing has halved every 18 months. This trend is known as Koomey’s Law and is closely related to the better-known Moore’s Law. You see, air cooling in a server has a fixed thermal envelope, so as a chip becomes twice as efficient in terms of compute per watt, it can become twice as fast within that envelope – it is this improvement in energy efficiency that has allowed Moore’s Law to keep delivering faster chips.
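
To put that rate in perspective, here is a quick back-of-the-envelope sketch in Python. The 30-year span and the 18-month halving period are simply the figures quoted above; the output is an order-of-magnitude illustration, not a measured result.

```python
# Rough sketch of Koomey's Law: energy per computation halves every ~18 months.
YEARS = 30
HALVING_PERIOD_YEARS = 1.5

halvings = YEARS / HALVING_PERIOD_YEARS        # 20 halvings over 30 years
efficiency_gain = 2 ** halvings                # roughly a million-fold gain

print(f"Halvings of energy per computation: {halvings:.0f}")
print(f"Implied improvement in compute per watt: ~{efficiency_gain:,.0f}x")
```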

Chip and motherboard manufacturers have dealt with the increasing heat output of high-performance servers, the kind you’d find in the average data centre or HPC facility, by spreading the heat out to fill the allowable thermal envelope of air cooling. This is why we have two-, four- or even eight-processor servers. However, unless something dramatic happens, spreading out the heat is no longer going to work.

Transmission burns energy

The problem is the energy cost of transmitting data, which has not been falling as fast as the energy cost of computing on that data. In other words, the ratio between the energy used to process data and the energy used to move it (to RAM or across the PCIe bus, for example) is shifting, making the interconnect a much larger share of total energy consumption.
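
To see why that shift matters, here is a small hedged illustration in Python: the starting energy split and the rate at which transmission energy improves are assumptions chosen purely to make the trend visible, not measured figures.

```python
# Hypothetical illustration: compute energy per operation falls roughly 4x every
# 3 years (Koomey's Law), while the energy to move that operation's data is
# assumed to improve more slowly. Starting values are arbitrary units.
compute_energy = 0.8      # assumed energy spent processing, per operation
transfer_energy = 0.2     # assumed energy spent moving data, per operation

for year in range(0, 13, 3):
    share = transfer_energy / (compute_energy + transfer_energy) * 100
    print(f"Year {year:2d}: interconnect is {share:4.1f}% of the energy budget")
    compute_energy /= 4.0     # Koomey-style improvement in processing
    transfer_energy /= 1.5    # assumed, slower improvement in transmission
```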

Years ago, mobile phones had batteries that would last a week, not a day. Battery life has been degraded, but not by increased processing power: the growing power consumption of data transmission is to blame. Don’t believe me? Turn off data access, Wi-Fi, 3G and 4G and your battery will last ages.

Twenty years ago, data centres had 1kW of equipment in a cabinet. Ten years ago this was 5kW. Data centres being built today usually aim for 10kW or even more. Some HPC data centres, where the interconnect is key to the collective performance of the network of servers, are aiming for 20kW to 30kW, which approaches the limit of air cooling.
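
As a rough aside, those round numbers imply a fairly steady compounding of rack power density. A quick sketch, using only the figures quoted above:

```python
import math

# The round figures above: 1kW per cabinet twenty years ago, 10kW today.
kw_then, kw_now, years = 1.0, 10.0, 20

growth = (kw_now / kw_then) ** (1 / years)       # average annual growth factor
doubling_time = math.log(2) / math.log(growth)   # years for density to double

print(f"Average annual growth in rack density: ~{(growth - 1) * 100:.0f}%")
print(f"Implied doubling time: ~{doubling_time:.0f} years")
```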

To counteract the rising energy cost of the interconnect between processor cores, processor and RAM, and processor and storage, the industry is looking to reduce transmission distances. By 2020, RAM is likely to have moved from 5cm from the processor to 5mm, and local storage from 30cm away to 3cm.

This interconnect issue is going to make it harder to build multi-socket servers and more likely that servers will have a single socket per node (although multiple nodes per board is a possibility).

[Image: Cray II supercomputer – Cray Supercomputers]

Too dense to cool

So what does all this mean? Well, this new super-dense bundle of processor, RAM and storage is going to be very difficult to cool by air-based means, making liquid cooling in one form or another essential. Nicolas Dube, distinguished technologist at HP, believes that “By 2020, the density and heat we will be seeing will require liquid cooling.”

I’m inclined to agree, and would also add that, required or not, liquid cooling is once again becoming a force to be reckoned with in IT (as it was in the days of the Cray II, pictured here). It can offer numerous density, efficiency and performance benefits over air-cooled alternatives, and there is now a much healthier range of liquid-cooled products on the market – which the IT industry is taking very seriously indeed.

Hydrophobic IT managers might now be shaking in fear, but liquid cooling is not necessarily that scary. It can look and behave the same as a normal server system. It’s not all open tanks resembling your local fish and chip shop.

Liquids probably already exist in your data centre. Inside your CRAC units, they sit at pressures high enough to soak your IT gear in the event of a failure. In fact, liquid cooling will protect you against other liquid risks, as your servers will be sealed, providing additional reliability benefits.

Mike Patterson, senior power/thermal architect at Intel, recently suggested as much in an interview on the subject of liquid cooling: “In almost every instance of liquid cooling, the processor will run much lower and be more reliable. With liquid cooling, there will always be less temperature fluctuation than with air cooling, which will help reliability.” Certainly a compelling endorsement for the future of liquid cooling, and one that I wholeheartedly agree with.

Fear not, the future is coming, and it’s fluid.

Peter Hopton is chief visionary officer of Iceotope.