
The phenomenon we perceive as heat or cold is produced by the motion of molecules. Only at absolute zero (−273˚C) do molecules have essentially no motion. As they become more energetic, their temperature rises and their state can change from solid to liquid to gas, and even to plasma when the molecules themselves break apart. As energy states increase, the rate of collisions between molecules increases and photons are emitted, which we observe as radiation. At lower energy levels this radiation is in the infra-red part of the spectrum, moving into the visible and beyond at higher energy levels.

The first law of thermodynamics holds that energy can be neither created nor destroyed, only changed in form. It is one of these changes of form that creates our server heating problems. Electrical energy arrives in a chip as a flow of electrons that bang into molecules and start them moving faster, producing heat. Those molecules must be slowed down enough (cooled) to avoid damaging the chip.

The second law of thermodynamics holds that when two systems are allowed to interact, they will achieve an energy equilibrium; that is, energy will flow from the more energetic to the less energetic system. The question, therefore, is: what is the best transfer mechanism to remove excess energy? We can choose from radiation, convection (forced or natural), conduction and phase change.

Radiation
At the time of writing most electronics are solid state, so we can assume that our high-energy system is a solid. The lower-energy system surrounding it could be a vacuum, gas, liquid or another solid. With a vacuum, the only way for energy to escape the first system is through radiation. According to the Stefan-Boltzmann law, the energy radiated by a black body is defined by: q = σT⁴A (where q = power in watts, σ = 5.67 × 10⁻⁸ W/m²K⁴, the Stefan-Boltzmann constant, T = absolute temperature in kelvin, and A = body area in square meters).

So, assuming a 33mm x 33mm chip package at a temperature of 70˚C, we conclude we can dissipate only 0.75W through radiation with a perfect black body and surroundings at absolute zero; definitely insufficient.
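
As a sanity check, here is a minimal sketch of that calculation (Python, assuming one radiating face of the 33mm package and an emissivity of 1; the 0.75W figure in the text presumably assumes a slightly lower emissivity, but the conclusion is the same either way):

```python
# Radiated power from a 33 mm x 33 mm package face, ideal black body,
# surroundings at absolute zero: q = sigma * T^4 * A.
SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W/m^2K^4
T = 70 + 273.15            # package temperature, K
A = 0.033 * 0.033          # radiating area, m^2
emissivity = 1.0           # perfect black body (real packages are lower)

q = emissivity * SIGMA * T**4 * A
print(f"Radiated power: {q:.2f} W")   # well under 1 W -- nowhere near enough
```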

Conduction
A gas is one step up from a vacuum. There are about 2.7 × 10²² molecules in a liter of air. Those molecules, if packed together at absolute zero, would occupy only 4.7 × 10⁻⁸ liter. Not surprisingly, thermal conductivity, k, is only 0.028 W/m-K at room temperature. For every watt removed from the 33mm x 33mm chip, there would be a temperature difference of 800˚C per inch (25mm) of air between the hot chip and the cold body.

Water, one of the more popular coolants, has 3.3 × 10²⁵ molecules per liter – over a thousand times denser than air. Naturally, this implies a higher conductivity – 0.58 W/m-K, 20 times higher than air – dropping the temperature difference to 40˚C per inch.

Aluminum has 6.02 × 10²⁵ atoms per liter. Its conductivity is 205 W/m-K – 350 times that of water. The one-inch temperature gradient is just 0.11˚C – over 7,000 times better than air. Clearly, aluminum or other high-conductivity metals such as copper win hands down for conductivity. The atoms are locked in a crystalline matrix, where they vibrate and pass energy to their neighbors. A liquid is almost as dense and its molecules move freely (good for convection), but they do not readily pass their energy on to other molecules. In a gas the molecules are so few that they rarely collide, reducing conductivity even more.
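
Those per-inch gradients follow from the one-dimensional conduction relation ΔT = q·L/(k·A); a minimal sketch using the chip area and gap assumed above reproduces them (values rounded as in the text):

```python
# Temperature difference per watt across one inch (25 mm) of each medium,
# conducted through a 33 mm x 33 mm cross-section: dT = q * L / (k * A).
A = 0.033 * 0.033          # conduction area, m^2
L = 0.025                  # path length, m (one inch)
q = 1.0                    # heat flow, W

for name, k in [("air", 0.028), ("water", 0.58), ("aluminum", 205.0)]:
    dT = q * L / (k * A)
    print(f"{name:9s} k = {k:7.3f} W/m-K -> {dT:7.2f} C per watt-inch")
```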

Natural convection
As far as I know, nobody is using natural liquid convection for server cooling. In fact, the only place where I have seen natural water convection used is in British plumbing (boiler at the bottom, receptacle in the middle and a cold tank above) – and we have all heard the jokes about that.

To compute the heat removed by natural convection in air, no less than 14 parameters must be taken into account. Even then some are approximations or simplifications by worthies from centuries past – Rayleigh, Reynolds, Prandtl, Nusselt and Grashof, for example. Fortunately, there is a simplification: h (the heat transfer coefficient) = C × ((T1 − T2)/L)ⁿ = 3.77 W/m²K

C and n are dimensionless coefficients, which can be assumed to be 0.59 and 0.25, respectively. T1 and T2 are the temperatures of the hot body and cold plate, respectively. L is the distance between the hot body and the cold plate – 25mm in this example. Thus for our 33mm x 33mm CPU, the gradient would be 5.8˚C per watt.
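
For reference, the simplified correlation can be evaluated directly. The sketch below assumes a hot-body temperature of 70˚C and a cold-plate temperature of 28˚C (an assumption on my part; roughly this difference is needed to reproduce the 3.77 W/m²K quoted above). The resulting ˚C-per-watt figure then depends on the effective convecting area, i.e. the fin area attached to the package rather than the bare 33mm lid.

```python
# Simplified natural-convection coefficient in air: h = C * ((T1 - T2) / L) ** n
C, n = 0.59, 0.25          # dimensionless coefficients assumed in the text
T1, T2 = 70.0, 28.0        # hot body and cold plate temperatures, C (assumed)
L = 0.025                  # hot-body-to-plate spacing, m

h = C * ((T1 - T2) / L) ** n
print(f"h = {h:.2f} W/m^2K")   # ~3.8, in line with the 3.77 quoted above
```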

Conclusion: Natural convection may work well for lower-power chips (<5W).

Forced convection
Both gases and liquids can be used in forced convection systems. We will only discuss air and water in this context. Air is generally ducted to where its cooling effect is required, while water is confined to piping and heat exchangers.

Air
While the number of parameters required to derive the heat transfer coefficient grows to about 18, there are some simplifications that can be used for sanity checks. One of the simplest, for heat sinks in air at standard temperature and pressure, is: Theta (Θ) = 916 × (L/V)^0.5 / A, in ˚C/W (where L = heat sink length in inches, V = air velocity in ft/min and A = total surface area in square inches).
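
As a sanity-check helper, that simplification can be wrapped up as below (a sketch only; the heatsink dimensions in the example are hypothetical, and as the next paragraph notes, no formula replaces in-situ measurement):

```python
import math

def heatsink_theta(length_in, velocity_fpm, area_in2):
    """Rough forced-air heatsink thermal resistance, C/W, per the
    simplification above: Theta = 916 * sqrt(L/V) / A.

    length_in    -- heatsink length in the flow direction, inches
    velocity_fpm -- air velocity, feet per minute
    area_in2     -- total finned surface area, square inches
    """
    return 916.0 * math.sqrt(length_in / velocity_fpm) / area_in2

# Hypothetical example: a 3.5-inch sink with 200 in^2 of fin area at 400 ft/min.
print(f"Theta ~ {heatsink_theta(3.5, 400.0, 200.0):.2f} C/W")
```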

However, this and even more sophisticated models are no substitute for in-situ measurement. Figure 1 below shows a representative example of the difference between datasheet values and those derived using the heatsink calculator provided on some manufacturers’ websites. The derived values use exactly the same inputs as the formula mentioned earlier. Note the ~2x difference between the curves. dP shows the pressure drop required to achieve the stated air flow.

[Figure 1: Datasheet versus calculated heatsink thermal resistance, with the required pressure drop (dP), plotted against air flow]

Adding to the complexity is the variation between servers. The same heatsink will perform differently as the externalities vary. These include ducting, positioning of each CPU (if more than one), DRAM and VRM layouts. The other significant factor is the fans’ specifications. They must be capable of providing sufficient volume and pressure to drive the air through the heatsink(s) and not consume too much power doing so.

To establish operating requirements, we look at the maximum allowable CPU lid temperature and CPU power. Typically, 70˚C has been the allowable maximum, but excursions up to 95˚C may be allowed in the future. Maximum power for high-performance CPUs is commonly 130W, even though most servers may be equipped with only 95W CPUs. Assuming that the maximum operating inlet temperature is 45˚C, we would have a margin of 25˚C.

Thus, the allowable thermal resistance would be 25/135 = 0.185˚C/W. As can be seen from Figure 1, that is the maximum capability of the heatsink; at that point the fans must deliver 50 CFM at a static pressure of 0.35” of water.
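
The budget arithmetic is simply margin divided by power; a minimal sketch (the 135W design power follows the text, leaving a little headroom over the 130W CPU rating):

```python
# Air-cooling thermal budget: allowable heatsink resistance = margin / power.
t_lid_max = 70.0     # maximum allowable CPU lid temperature, C
t_inlet_max = 45.0   # maximum operating inlet air temperature, C
power = 135.0        # design power, W

margin = t_lid_max - t_inlet_max         # 25 C
theta_max = margin / power               # ~0.185 C/W
print(f"Margin {margin:.0f} C -> allowable resistance {theta_max:.3f} C/W")
```
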
[Figure 2: Typical operating curves for two fans]


Figure 2 above shows a typical set of operating curves for two fans. When operating at maximum power they should be at the inflection point, delivering 30-40 CFM.

Typically, a 2U server heatsink is about 3.5” wide and 2.5” tall. Banks of DRAMs will be deployed on one or both sides of the CPU (see Figure 3 below). In the case of the half-width board on the left there is room for only two fans, mandating the choice of the more powerful fan.

[Figure 3: Two representative motherboard layouts, with DRAM banks flanking the CPUs; the board on the left is half-width]

The fans on the motherboard on the left will draw 60 watts. Further, at least 50% of the air will bypass the heatsinks, producing borderline performance in normal operation. A fan failure will cause the CPU to throttle in order to stay within the thermal envelope, thus losing performance. The system on the right is a little more forgiving, but a fan failure still has the potential to affect performance. Potentially, its fans could draw up to 150W – an additional 30% load.

As the power consumed by a fan is proportional to the cube of the volume of air it moves (CFM), from an energy-efficiency point of view it is better to share the load across as many fans as possible. For example, if one fan could produce adequate air flow for cooling while drawing 32W, two of the same fans sharing the load would consume only 8W between them. (Note that the energy of the fans adds slightly to the air temperature, but usually by less than 1˚C, so it is not a significant factor.)
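
The arithmetic behind that example is the fan affinity law; a minimal sketch (ideal fan-law scaling, ignoring motor and controller losses):

```python
# Fan affinity law: power scales with the cube of the delivered air flow.
# One fan moving the full flow draws 32 W; each of two fans moves half
# the flow, so each draws (1/2)**3 of the single-fan power.
single_fan_power = 32.0                              # W at full flow
n_fans = 2
per_fan = single_fan_power * (1.0 / n_fans) ** 3     # 4 W each
total = per_fan * n_fans                             # 8 W for the pair
print(f"{n_fans} fans: {per_fan:.0f} W each, {total:.0f} W total")
```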

After the hot air is exhausted from the server it is either drawn into a cooling unit (itself cooled by water or pumped refrigerant) and re-circulated to the server inlets, or exhausted to the atmosphere. In the latter case, fresh outside air is directed to the server inlets.

For a rack with 80 server motherboards drawing 450W each (a component load of 36kW plus a typical fan load of 6kW, or 75W per server), approximately 445,000 cubic feet of air (12,600 m³) per hour needs to be re-circulated by its fans to maintain a 10˚C air temperature rise at the server exits. (It should be noted that the external environment can also affect fan performance. Passive rear-door heat exchangers and cabling are the two biggest problems: they can block server exhaust and reduce efficiency.)
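
That air volume comes from the sensible-heat balance Q = ṁ·cp·ΔT; a minimal sketch, assuming room-temperature air properties:

```python
# Air flow needed to carry 42 kW (36 kW components + 6 kW fans) on a 10 C rise.
heat_load = 42_000.0    # W
cp_air = 1005.0         # J/kg-K
rho_air = 1.19          # kg/m^3, roughly room-temperature air (assumed)
delta_t = 10.0          # C rise across the servers

mass_flow = heat_load / (cp_air * delta_t)        # ~4.2 kg/s
vol_m3_per_h = mass_flow / rho_air * 3600.0       # ~12,600 m^3/h
vol_ft3_per_h = vol_m3_per_h * 35.31              # ~445,000 ft^3/h
print(f"{vol_m3_per_h:,.0f} m^3/h  (~{vol_ft3_per_h:,.0f} ft^3/h)")
```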

Water
Water is easier to handle than air, as it can be piped exactly to where you want it to go. Most systems consist of three components – in-server, in-rack and exhaust. In all known systems the in-server component connects to the in-rack distribution system via two quick connects.

They also come in two flavors: IBM, and everybody else. The IBM version is solidly engineered, with all cooling components connected with brazed copper tubing. In Figure 4 (below), it can be seen that each hot component has an individual cooling block. Very little, if any, air cooling is required.

[Figure 4: An IBM water-cooled server, with an individual brazed-copper cooling block on each hot component]

The representative of the “others” cools only the CPUs and is interconnected with flexible tubing and plastic connectors. Air cooling is required for all other components, including DIMMs.

[Figure 5: Rack-level plumbing of a typical water-cooled system]


Figure 5 (above) shows the rack-level plumbing of a typical water-cooled system. Most of these systems are advertised as having the ability to be cooled with hot water, and they do remove heat quite efficiently. The block in contact with the CPU or other hot body is usually copper, with a conductivity of around 400 W/m-K, so the temperature drop across it is negligible.

If water is pumped slowly enough, reducing pump power, the flow is laminar. Because water is not a good conductor of heat, a temperature drop of around 5˚C can be expected across the water-to-copper interface. This is usually negligible, but if necessary it can be reduced by increasing the flow rate to force turbulent flow.

Both server types have two CPUs plumbed in series: maximum power consumption of each is around 130W. If we assume the maximum lid temperature is 70˚C and the inlet water is 40˚C, each CPU could heat the water 10˚C while accommodating the thermal resistance of the water film and the cold block itself.

For a rack with 80 servers, and therefore 160 CPUs (21kW), about 1.8 cubic meters of water per hour would be required. Pump energy would be around 80W. Of course, another 15kW (of the 450W total per server) remains to be removed by fans. Clearly, without special provision such as rear-door heat exchangers, the racks cannot be deployed at maximum density, resulting in a power density of around 600W/sq ft.
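
The water-side flow is the same sensible-heat balance; a minimal sketch assuming the rack's 21kW CPU load is carried on a 10˚C water temperature rise:

```python
# Water flow to carry 21 kW of CPU heat on a 10 C rise, plus what is left for air.
cpu_heat = 21_000.0     # W picked up by the water loop (160 CPUs at ~130 W)
cp_water = 4186.0       # J/kg-K
rho_water = 1000.0      # kg/m^3
delta_t = 10.0          # C rise across the loop (assumed)

mass_flow = cpu_heat / (cp_water * delta_t)       # ~0.5 kg/s
vol_m3_per_h = mass_flow / rho_water * 3600.0     # ~1.8 m^3/h
print(f"Water flow: {vol_m3_per_h:.1f} m^3/h")

rack_total = 80 * 450.0                           # W for the whole rack
print(f"Left for air cooling: {(rack_total - cpu_heat) / 1000:.0f} kW")
```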

While the physics of the system are workable, the statistics may not be. Let’s be optimistic and assume the MTBF (mean time between failures) of a liquid connector is 10⁷ hours and the service life is three years, or 26,280 hours. The probability of survival is e^(−26,280/10⁷) = 0.9974, or a 0.26% probability that it would fail. If there were 1,000 servers and 2,000 connectors, about 5 would fail. This calculation would be reasonable for the IBM system, where all the other connections are brazed to the piping. Where flexible tubing and plastic connectors are in the mix, together with the vibration of fans, the probability of failure increases.
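
The survival figure uses the standard exponential reliability model; a minimal sketch with the MTBF and service life assumed above:

```python
import math

# Exponential reliability: P(survive t) = exp(-t / MTBF).
mtbf_hours = 1e7              # assumed connector MTBF
service_hours = 3 * 8760      # three-year service life = 26,280 hours
connectors = 2 * 1000         # two quick connects per server, 1,000 servers

p_survive = math.exp(-service_hours / mtbf_hours)     # ~0.9974
expected_failures = connectors * (1 - p_survive)      # ~5
print(f"P(survive) = {p_survive:.4f}, expected failures ~ {expected_failures:.1f}")
```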

Finally, water chemistry is difficult. Described as the ‘universal solvent’, it can eat through metals and plastics if it has not been pre-treated properly. Another concern is algae growth. A closed secondary loop to the components is essential to manage such issues reliably, but a leak in that loop could bring the entire loop and its associated servers down.

Phase-change
Two mechanisms are used to remove heat: thermal conduction, followed by evaporation of a phase-change fluid. Heat is conducted to a single plane by a series of heat risers placed atop each component that generates a significant amount of heat. In all cases these include the CPUs, VRMs, DIMMs and system glue, plus, if merited, networking and other components generating over ~2W.

Heat risers sit at the top and only the bottom server is covered by a cold plate. The cold plates are a chassis component and are all permanently soldered into refrigerant distribution manifolds, which eliminates the risk of leakage from connectors. Liquid is pumped through the cold plates placed upon the heat risers attached to CPUs, DIMMs, VRMs and so on. The heat causes the liquid to boil, absorbing 93 times as much heat as the same weight of water.

The liquid and gas mixture is then passed to a heat exchanger where it is re-converted to 100% liquid. Unlike in air-cooled systems, the thermal resistance between heat source and liquid is so small that high coolant temperatures can be tolerated, and no chiller is required in most cases. The only energy required is for circulation pumps and for the external fans in a dry or adiabatic cooler. The cooling PUE (power usage effectiveness) can be as low as 1.03.

The maximum power consumption of a CPU is around 130W, and we assume the maximum lid temperature is 70˚C. As the system is isothermal, the cold plate is at virtually the same temperature everywhere; heat input just causes liquid to change to gas with no temperature rise. Assuming the inlet refrigerant is 40˚C, and having established by measurement that the thermal resistance from CPU lid to refrigerant is <0.2˚C/watt, the CPU lid would reach 66˚C (40 + 130 × 0.2). Because gasification causes bubble formation, and hence turbulence, laminar-flow film formation is not a problem. For a whole rack with 160 servers (72kW at 450W per server), about 0.66 cubic meters of refrigerant per hour would be required. In practice, with the refrigerant’s viscosity around 25% and the fluid flow around 10% of those of a water-based system, pump energy is very low – about 30W.
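
The lid-temperature figure is a straight resistance calculation on top of the (isothermal) refrigerant temperature; a minimal sketch with the values assumed above:

```python
# Two-phase cold plate: T_lid = T_refrigerant + P * R_lid_to_fluid.
t_refrigerant = 40.0    # C, inlet refrigerant (stays isothermal while boiling)
cpu_power = 130.0       # W
r_lid_to_fluid = 0.2    # C/W, measured lid-to-refrigerant resistance (upper bound)

t_lid = t_refrigerant + cpu_power * r_lid_to_fluid
print(f"CPU lid ~ {t_lid:.0f} C (limit 70 C)")   # ~66 C, comfortably inside the limit
```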

The benefits of such an efficient phase-change cooling system are striking:

- Very high power densities can be achieved

- 100kW racks enable data center density of 4,000 W/ft²

- Rack floor space for a 10MW data center can be reduced from 50,000 ft² to 2,500 ft²

- Data center construction and facility costs drop by around 50%