Driven by rising power densities and heat levels, data center cooling strategies have changed dramatically over time. Until relatively recently, most cooling schemes relied on so-called ‘chaos’ air distribution methodologies, in which perimeter computer room air conditioning (CRAC) units pumped out massive volumes of chilled air that both cooled IT equipment and helped push hot server exhaust air towards the facility’s return air ducts. Chaos air distribution, however, commonly results in a wide range of significant inefficiencies, including:
Re-circulation – When insufficient cool air reaches the face of the rack, often because of poor rack hygiene, hot exhaust air can find its way back into server air intakes, heating IT equipment to dangerous temperatures.
Air stratification – Because air naturally settles into temperature-based layers, the air reaching the top of the rack face is warmer than the air at the bottom. To keep that upper air cool enough, set points on precision cooling equipment are often forced lower than recommended. In a further attempt to remediate stratification, technicians frequently increase the fan speed of CRAC units to deliver more cool air to the room, which can result in bypass air.
Bypass air – When the velocity of the cool air stream exceeds the ability of the server fans to draw it in, cool air shoots past the face of the IT rack and joins the return air stream without ever passing through the servers, reducing cooling efficiency.
Eager to combat the inefficiencies above and keep pace with steadily climbing data center temperatures, businesses often adopt hot aisle/cold aisle rack orientation arrangements, in which rack rows are positioned so that cool air intakes face only other intakes and hot air exhausts face only other exhausts.
Such configurations generate convection currents that improve airflow. Although superior to chaos air distribution, hot aisle/cold aisle strategies have proven only marginally more capable of cooling today's increasingly dense data centers, largely because both approaches share a common, fatal flaw: they allow air to move freely throughout the data center.
This flaw eventually led to the introduction of containment cooling strategies. Designed to organize and control air streams, containment solutions enclose server racks in sealed structures that capture hot exhaust air, vent it to the CRAC units and deliver chilled air directly to the server equipment’s air intakes. This results in a series of important benefits:
Improved cooling efficiency – By preventing the supply and return air streams from intermingling, well-designed containment solutions eliminate wasteful re-circulation, air stratification and bypass airflow.
Increased reliability – Eliminating re-circulation spares servers from exposure to potentially dangerous warm air that can cause thermal stress and shorten the life of IT equipment.
Lower energy spending – To counteract the effects of re-circulated exhaust air, legacy cooling schemes typically chill return air to 55ºF/12.78ºC. Containment-based cooling systems completely isolate return air, however, so they can safely deliver supply air at 65ºF/18.33ºC or higher. As a result, containment cooling strategies typically reduce CRAC unit power consumption by an average of 16%.
Greater floor plan flexibility – To generate the cooling convection currents that make hot aisle/cold aisle strategies work, companies must place their server racks in rigidly aligned, uniformly arranged rows. Containment strategies don't rely on convection, however, so they empower data center designers to position enclosures in any configuration that best fits their needs.
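The setpoint figures in the energy-spending point above can be sanity-checked with a quick conversion. The per-degree savings rate in the sketch below is simply a linear reading of the 16%-over-10ºF figure cited in this document, not an independently verified model:

```python
# Rough illustration of the setpoint arithmetic above. The 16% savings
# over a 10 degree F raise comes from this document's own figures; the
# per-degree breakdown is a simple linear assumption, not an
# industry-verified model.

def fahrenheit_to_celsius(temp_f: float) -> float:
    """Convert a temperature from degrees Fahrenheit to Celsius."""
    return (temp_f - 32) * 5 / 9

legacy_supply_f = 55       # typical legacy return-air setpoint (deg F)
containment_supply_f = 65  # supply setpoint containment permits (deg F)

print(f"Legacy setpoint:      {fahrenheit_to_celsius(legacy_supply_f):.2f} C")
print(f"Containment setpoint: {fahrenheit_to_celsius(containment_supply_f):.2f} C")

# Per the document, raising the setpoint 10 deg F cuts CRAC power by
# about 16% on average -- roughly 1.6% per degree under this assumption.
headroom_f = containment_supply_f - legacy_supply_f
savings_per_degree = 16 / headroom_f
print(f"Implied savings: ~{savings_per_degree:.1f}% per deg F over {headroom_f} deg F")
```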
Impact on design
Despite the revolutionary impact of containment strategies on data center cooling, most organizations continue to plan new computing facilities the same way they always have. First, they design a building and devote some of it to the data hall or white space. Then, they fill the white space with as many server racks as it will hold.
Designing data centers in that traditional manner can create a wide range of problems. For example, an undersized or oversized power and cooling infrastructure can limit operating capacity or increase capital expenses. Inconveniently located structural elements can force containment ducts to bend and detour in ways that reduce their efficiency, and inadequate room dimensions can complicate server rack placement and produce wasted floor space.
As a result, companies are increasingly recognizing the wisdom of designing data centers not from the walls in but from the server rack out. Instead of building a room and then filling it with racks, they're selecting the ideal racks for their needs and designing the room around them. Instead of under- or overprovisioning their new facility's power and cooling resources, they're installing the optimal infrastructure for the precise array of hardware and enclosures they'll be using. Instead of improvising solutions to efficiency-sapping structural defects, they're preventing those defects from occurring in the first place. The end result is a data center that's not only less costly to cool and maintain, but more reliable and better suited to business requirements. Such designs can also support significantly higher rack densities and utilization.