The following article is an opinion from Ian Bitterlin. DCD has published corrections at the foot of this article, along with an article on the use of CFD. DCD is reviewing its processes to avoid any suggestion that our platform is being misused to present opinions as fact, especially where they can be seen to be factually unsound.

What is CFD? A quick Google search will tell you that the initials can stand for a Contract for Difference, a financial instrument that allows traders to invest in an asset class without owning the asset. However, that is not the CFD that this short paper refers to!

Computational fluid dynamics (CFD) is the modern engineering solution to modelling flows of liquids and gases across solid surfaces and through/around 3D structures. Its accuracy as a mathematical model is enabled by ever-higher computational capacity and speed, and it has many applications: in Formula 1 aerodynamics, for example, it has made most wind-tunnel testing redundant. In fact, if you have the IT capacity, CFD can be superior to a wind-tunnel, as cross-winds and turbulence can be modelled at will.

CFD saves money if the model is right – Sudlows

## Science in the software

The science is built into the software – modelling the density, energy, thermal inertia, viscosity and elasticity (and many other physical attributes) of a very small element of the fluid in question (such as air), and how adjacent elements interact and pass their behaviour from one to the next.

Like many branches of mathematics, such as calculus, the smaller you make the finite elements you are modelling, the more accurate the result will be. In CFD this means that as you divide the 3D space into ever-smaller parcels, the accuracy of the predicted speed, rotation and direction of a fluid flowing through that space will approach, although never equal, reality.
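This grid-convergence behaviour can be demonstrated without any CFD package at all. The sketch below (plain Python, with illustrative resolutions of my own choosing) approximates a second derivative – the kind of operator that appears in the diffusion terms of a flow solver – with a central finite difference, and shows the error shrinking as the cells get smaller while never reaching zero:

```python
import math

def max_error(n_cells):
    """Max error of a central-difference second derivative of sin(x)
    on (0, pi) using n_cells interior points; the exact answer is -sin(x)."""
    h = math.pi / (n_cells + 1)
    worst = 0.0
    for i in range(1, n_cells + 1):
        x = i * h
        approx = (math.sin(x - h) - 2.0 * math.sin(x) + math.sin(x + h)) / h**2
        worst = max(worst, abs(approx - (-math.sin(x))))
    return worst

coarse = max_error(10)  # cell size h = pi/11
fine = max_error(21)    # cell size h = pi/22, i.e. half the cell size
# The scheme is second-order: halving the cell size cuts the error
# roughly fourfold -- better with every refinement, but never zero.
```

Each refinement costs more computation for less remaining error, which is exactly the accuracy-versus-capacity trade-off described above.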

CFD will always only be a ‘best fit’, based upon how much computing capacity you have and how long you are prepared to wait for the answer to pop out of the machine. A good analogy is 3D graphics rendering (modelling whole buildings, plantrooms, walk-throughs etc), which became economical in the late 90s but needed the best PC money could buy and overnight rendering of each image; these days it takes a notebook and a few minutes.

So, where is CFD applied in the data center? Well, to very good effect, inside the ICT hardware – modelling cooling air flow through the server, across an adiabatic wetted-pad, or through a chiller cooling coil etc. But it is also used to model air-flow through the room: out of a CRAC/CRAH and into the raised floor (if there is one), through ventilated floor tiles and into the cold-aisle, into ICT cabinets, out of cabinets into the hot-aisle, and the route back for the heated air to the source of the cooling.

CFD is also used to explain (or even predict) ‘hot-spots’ in the room where the provision of cooling air is inadequate, and it is embedded in some software products that help to manage your install/change decisions about where to install that next power-hungry server. The pictures (both stills and movies) are impressive and pretty – and used extensively as supporting graphics in cooling dissertations.


## Models versus reality

But are such data-center CFD models accurate? No. More than 99 percent of the time they represent only a snapshot of one possible scenario that almost never exists. The only real benefit, so far, is the proof (if you need it) that aisle containment, blanking plates and hole-stopping are good for energy efficiency – as compared with around half the air bypassing the crucial zone if you do everything wrong.

Let me explain why I said ‘no’ by first describing the use of CFD in the design phase of a typical server. The mechanical designer positions the cooling fans and all the heat-generating components that need thermal management, using CFD as a design tool. The space is small and confined, so the model can be highly accurate. The designer places the 20-30 temperature sensors that are installed in the average server at the key points, and someone writes the algorithm that takes all the data and calculates the fan speed needed to keep the hottest component below its operational limit. The designer can then optimize the physical layout and test the resultant air-flow at will. However, we should note the simple model environment: one source of input air at zero pressure differential to the exhaust air, and one (small) physical space with a range of inlet temperatures taken from ASHRAE TC9.9 Allowable. The resulting CFD models will be highly accurate.
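As a toy illustration of the kind of control loop described above – the thresholds, ramp and idle speed below are entirely hypothetical, since each vendor writes its own proprietary algorithm – a proportional fan-speed law driven by the hottest sensor might look like this:

```python
def fan_speed_percent(sensor_temps_c, limit_c=95.0, idle_pct=20.0):
    """Map the hottest sensor reading to a fan duty cycle (percent).

    A hypothetical proportional control law for illustration only --
    real server firmware uses vendor-specific algorithms tuned per
    component, as described in the text.
    """
    hottest = max(sensor_temps_c)
    if hottest >= limit_c:
        return 100.0  # at or above the operational limit: full speed
    ramp_start_c = 40.0  # assumed temperature where the ramp begins
    if hottest <= ramp_start_c:
        return idle_pct  # cool enough: fans stay at idle speed
    # Linear ramp from idle speed up to 100 percent at the limit
    frac = (hottest - ramp_start_c) / (limit_c - ramp_start_c)
    return idle_pct + frac * (100.0 - idle_pct)
```

The point to note is that the output depends only on the hottest sensor, so the fan speed – and therefore the airflow demand – tracks the workload continuously, which matters for the argument that follows.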

Now let us compare this to trying to use CFD to model a data center space. We can list the main assumptions that the modeller must make:

• Several sources of air (fans in fan-walls or distributed CRAC/CRAHs)
• In some systems scavenge fans, as well as feed fans
• A very large room (limiting the resolution and therefore accuracy) with a fixed cabinet layout
• Ventilated floor tiles with a fixed or adjustable ‘open’ cross-sectional area, in a defined pattern/location that may or may not be followed on site
• An underfloor plenum (if used) with an indeterminate number of obstructions such as cabling
• An assumed level of leakage across cabinets
• An assumed level of bypass air between cold & hot-aisles
• Etc

But what to put inside each cabinet? This is where modelling departs from all reality and becomes little more than a seemingly infinite set of pretty pictures.

• kW load? But which: ‘design’, average, or even a set variable between cabinets?
• Temperature rise? Delta-T varies by hardware manufacturer and is linked to…
• Air volume throughput (m3/s) from each server in each cabinet
• Position in the cabinet: top, bottom or middle? Most 42U cabinets in enterprise and colocation facilities are only filled 50 to 60 percent
• Missing blanking plates (a far too frequent occurrence)
• Etc
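The link between the kW load, delta-T and air volume in the bullets above is the standard sensible-heat relation, Q = m·cp·ΔT. A minimal sketch – the density and specific heat are nominal sea-level values, and real figures vary with temperature and altitude:

```python
def airflow_m3_per_s(heat_kw, delta_t_k, rho=1.2, cp=1.005):
    """Volumetric airflow needed to carry away heat_kw of sensible heat
    at a given server delta-T.

    rho: air density in kg/m3 (nominal sea-level value, an assumption)
    cp:  specific heat of air in kJ/(kg.K)
    """
    return heat_kw / (rho * cp * delta_t_k)

# A 5 kW cabinet at a 12 K delta-T needs roughly 0.35 m3/s of air;
# the same 5 kW at a 20 K delta-T needs only about 0.21 m3/s.
low_dt = airflow_m3_per_s(5.0, 12.0)
high_dt = airflow_m3_per_s(5.0, 20.0)
```

Because delta-T varies by hardware manufacturer, two cabinets at identical kW loads can demand very different air volumes – which is why the assumptions in the list above dominate the model.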

And what to load each server with? The model is now incapable of mimicking reality. If the user enables the thermal management firmware on their hardware (admittedly not as common as it should be), so that each load’s fan speed is controlled by the IT load (or, more specifically, by the hottest component), then each server will be ramping its fans up and down on a continuous basis as the work flows in and out of the facility. In a large enterprise facility, the variations of air-flow are totally random from the modeller’s viewpoint and cannot be modelled using CFD. The fan speed variation with ICT load is unique to each server make/model, with idle power draw ranging from (as good as) 23 percent to (as bad as) nearly 80 percent of full load.
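The combinatorial scale of the problem is easy to demonstrate. Taking invented but plausible ranges for just four of the per-cabinet unknowns listed earlier – the specific values below are illustrative, not measured – the number of distinct static scenarios explodes long before fan-speed dynamics are even considered:

```python
from itertools import product

# Illustrative, invented ranges; real facilities have far more unknowns
kw_loads    = [2, 4, 6, 8, 10]   # kW load per cabinet
delta_ts    = [10, 12, 15, 20]   # server delta-T in kelvin
fill_levels = [0.4, 0.5, 0.6]    # fraction of a 42U cabinet populated
blanking    = [True, False]      # blanking plates fitted or missing

# Every combination of the four unknowns is a distinct scenario
per_cabinet = len(list(product(kw_loads, delta_ts, fill_levels, blanking)))
# 120 scenarios for ONE static cabinet; a 200-cabinet hall allows
# 120**200 combinations, so any single CFD run is one snapshot out of
# an astronomically large set -- before any fan ever changes speed.
```

This is the sense in which a CFD result is “a snapshot of one possible scenario that almost never exists.”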

There is even a further complication that current CFD systems could only model on an individual-server basis. Large cooling systems operate with a slight pressure differential between the cold and hot sides – although this is most undesirable, as the ICT hardware dictates the air-flow demand and its on-board fans are rated for zero pressure differential between inlet and exhaust. The result is that air is ‘pushed’ through the hardware as the server fans slow down. In extreme cases this forces the server fans to idle, and overheating can occur at certain places inside the server if the internal temperature sensors are not located to protect against forced-air cooling. Don’t forget that the cooling air will find the shortest path, not the most effective one.

## It’s just pretty pictures?

So there we have it: multiple variable loads in terms of kW, air-flow and physical layout, driven by the ICT load in each server and each cabinet location. In any mixed ICT environment, such as enterprise or colocation, the air flow will be totally unpredictable with our current level of modelling software. The ‘future’ may arrive when DCIM matures into a true ‘manager’ that adapts the M&E infrastructure dynamically in line with the incoming ICT load pattern, but until then CFD in the data room may remain just pretty pictures…