Cooling applications exposed to high ambient temperatures and other climatic hazards, such as elevated humidity, dust and sand, present data center operators in those regions with added challenges, calling for innovative approaches to cooling and higher levels of resilience and redundancy.

Free-cooling
In the northern hemisphere, free-cooling opportunities are relatively abundant. A London data center with a typical room temperature of 24°C is capable of operating in full free-cooling mode for 95% of the year, generating potential energy savings of up to 50% compared with a conventional chiller-based system.
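As a rough illustration of where such figures come from, the sketch below compares annual cooling energy for a hypothetical 1 MW IT load under a conventional chiller against a system spending 95% of the year in free-cooling. The load, hours and efficiency figures are illustrative assumptions, not measured data, but the arithmetic lands in the region of the savings quoted above.

    # Rough annual energy comparison for a hypothetical 1 MW IT load.
    # All efficiency figures below are illustrative assumptions, not vendor data.
    IT_LOAD_KW = 1000.0           # heat to be rejected
    HOURS_PER_YEAR = 8760
    FREE_COOLING_FRACTION = 0.95  # share of the year in full free-cooling (London example)

    CHILLER_COP = 3.0             # assumed conventional chiller efficiency
    FREE_COOLING_COP = 6.0        # assumed free-cooling efficiency (fans and pumps still run)

    conventional_kwh = IT_LOAD_KW * HOURS_PER_YEAR / CHILLER_COP
    free_cooled_kwh = IT_LOAD_KW * HOURS_PER_YEAR * (
        FREE_COOLING_FRACTION / FREE_COOLING_COP
        + (1 - FREE_COOLING_FRACTION) / CHILLER_COP
    )

    saving = 1 - free_cooled_kwh / conventional_kwh
    print(f"Estimated saving: {saving:.0%}")  # roughly 48% with these assumptions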

In climates where summer ambient temperatures can soar higher than 50°C, the long-held view has been that free-cooling opportunities are far more restricted, if not unachievable.

However, raising supply and return air temperatures by as little as 1°C opens a significant free-cooling window, even at such elevated ambient temperatures.

In Johannesburg, South Africa, for example, raising supply and return temperatures by 1°C, to 25°C and 38°C respectively, could improve annual energy efficiency by up to 110% with an air-cooled system alone and by 138% with a free-cooling system. This would result in an annualised energy efficiency ratio of 5.63, with the system in free-cooling or partial free-cooling for 99% of the year.
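For reference, the energy efficiency ratio (EER) is cooling output divided by electrical input, and an annualised figure weights each operating mode's EER by its hours of use. The sketch below shows the shape of that calculation; the mode splits and per-mode EERs are assumptions chosen purely so the arithmetic lands near the quoted 5.63, not the data behind the Johannesburg figure.

    # Annualised EER = total cooling delivered / total electricity consumed,
    # weighted across operating modes. Hours and per-mode EERs are illustrative.
    modes = {
        # mode: (hours per year, instantaneous EER in that mode)
        # 8,672 of 8,760 hours (99%) in full or partial free-cooling
        "full free-cooling":    (6000, 7.0),
        "partial free-cooling": (2672, 4.0),
        "DX only":              (88,   2.8),
    }

    LOAD_KW = 500.0  # assumed constant cooling duty

    cooling_kwh = sum(LOAD_KW * hours for hours, _ in modes.values())
    power_kwh = sum(LOAD_KW * hours / eer for hours, eer in modes.values())

    print(f"Annualised EER: {cooling_kwh / power_kwh:.2f}")  # about 5.63 here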

Whilst raising supply and return temperatures enables data centers to capture extra free-cooling opportunities, international opinion still varies as to what constitutes an acceptable supply temperature before IT performance is affected.

In 2008, ASHRAE, whose guidelines are the recognised international reference for data center environments, widened its recommended dry bulb temperature envelope (for Class 1 and 2) from 20 to 25°C (68 to 77°F) to 18 to 27°C (64.4 to 80.6°F). It did so to help reduce data center energy consumption, with no evidence of any negative impact on the reliability of IT equipment.

In addition to the financial savings from lower power consumption, delivering increased cooling duty from a smaller footprint frees up extra space within the data center for servers and racks.

Raising data center temperatures does, of course, add complexity: cooling equipment needs to compensate for the changes in relative humidity associated with higher dew points, in order to prevent unwanted latent cooling and condensation on refrigeration coils, especially in mechanical (DX) systems.
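To make the relationship concrete, dew point can be estimated from dry bulb temperature and relative humidity using the Magnus approximation; a coil surface colder than the dew point will condense moisture. The coil temperature below is an assumed figure for illustration.

    import math

    def dew_point_c(dry_bulb_c: float, rh_percent: float) -> float:
        """Magnus approximation for dew point over water."""
        a, b = 17.62, 243.12
        gamma = math.log(rh_percent / 100.0) + a * dry_bulb_c / (b + dry_bulb_c)
        return b * gamma / (a - gamma)

    COIL_SURFACE_C = 12.0  # assumed coil surface temperature
    td = dew_point_c(27.0, 60.0)  # return air at 27 degC and 60% RH
    print(f"Dew point: {td:.1f} degC")  # about 18.6 degC
    if COIL_SURFACE_C < td:
        print("Coil below dew point: latent cooling and condensation will occur")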

This can, however, be managed by an appropriate building management system (BMS). Similarly, temperature variations across the data center and at different heights on the rack can be controlled through hot aisle containment or rack-based cooling, which minimises the mixing of hot and cold air.

Concurrent free-cooling
Airedale pioneered the concept of ‘concurrent’ free-cooling in data centers over 15 years ago. In addition to energy savings achieved from reducing the need for DX cooling, concurrent free-cooling also maximises the part-load efficiencies of components such as EC fans, inverter-driven pumps and centrifugal compressors.

Variable speed control on such components allows output to be matched very precisely to the cooling duty, reducing energy consumption and unnecessary wear; because fan power varies roughly with the cube of speed, even a modest speed reduction yields a disproportionately large power saving.

EC fans, for example, are up to 50% more efficient than AC fans at part-load. Using temperature sensors and sequencer controls, cooling can be staged, ensuring a smooth transition from DX cooling to air free-cooling. On sites with both an air-cooled and a free-cooling chiller, the sequencer ensures that the free-cooling chiller is the first to start when the ambient temperature is low.
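As a simplified sketch of what such a sequencer might do, staging can be driven by the margin between ambient temperature and return water temperature. The thresholds and mode names below are assumptions for illustration, not Airedale's control logic.

    # Simplified free-cooling sequencer. Thresholds and modes are illustrative.
    RETURN_WATER_C = 18.0    # assumed chilled water return temperature
    FULL_FC_MARGIN = 8.0     # ambient this far below return: full free-cooling
    PARTIAL_FC_MARGIN = 2.0  # smaller margin: free-cooling assisted by DX

    def select_mode(ambient_c: float) -> str:
        margin = RETURN_WATER_C - ambient_c
        if margin >= FULL_FC_MARGIN:
            return "full free-cooling"     # fans and pumps only
        if margin >= PARTIAL_FC_MARGIN:
            return "partial free-cooling"  # free-cooling pre-cools, DX trims
        return "DX cooling"                # compressors carry the full load

    for ambient in (5.0, 14.0, 30.0):
        print(f"{ambient:5.1f} degC ambient -> {select_mode(ambient)}")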

Resilience and redundancy
In addition to heat and humidity, the conditions under which outdoor systems such as chillers and condensing units must operate in tropical and subtropical locations bring further challenges.

System design needs to factor in the risk of ingress of dust, sand and, in some cases, smoke from grass fires, as well as the potential erosion of external fabric and components by harsh environmental conditions, in order to minimise the opportunity for failure and the associated downtime while maintenance is carried out.

Because of this, cooling installations in hot climates are often designed to the highest redundancy levels, namely Tier IV data center status.

Typically, 2N+1 redundancy provides full back-up capability, with more than double the required equipment running independently and with no single points of failure. If four units are needed to meet the design load, for example, a 2N+1 scheme installs nine.
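The sketch below shows how installed unit counts scale under the common redundancy schemes, for an assumed design load and unit capacity (both figures illustrative):

    import math

    DESIGN_LOAD_KW = 1200.0   # assumed design cooling load
    UNIT_CAPACITY_KW = 300.0  # assumed capacity per cooling unit

    n = math.ceil(DESIGN_LOAD_KW / UNIT_CAPACITY_KW)  # units needed to meet load

    schemes = {
        "N":    n,          # no redundancy
        "N+1":  n + 1,      # one spare unit
        "2N":   2 * n,      # fully duplicated system
        "2N+1": 2 * n + 1,  # duplicated system plus a spare
    }

    for scheme, units in schemes.items():
        print(f"{scheme:>5}: {units} units installed")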

To ensure resilience under such extremes, systems are often ‘over-engineered’, for example with larger-capacity coils and higher-speed fans, and with suitably resilient protection of the external structure, components and controls from the elements. Robust air filtration may also be critical in locations where grass fires, sand and dust are prevalent.

Dual fluid
In the Gulf, there is an increasing trend towards the use of chilled water (CW) precision air conditioning (PAC) systems, where each circuit is connected to a chiller that produces cold water for all aspects of a building.

These systems are generally more energy efficient than DX systems.

However, the consensus among local consultants and decision-makers is that cooling systems in critical environments should always be supported by two independent cooling media, DX and CW, known as ‘dual fluid’ solutions.

The dual fluid route provides redundancy: the primary CW circuit is connected to an external chiller, while the secondary DX circuit operates in standby mode. This ensures that the PAC units can continue to function in the event of downtime on the CW circuit.
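A minimal sketch of that failover decision, assuming the PAC controller monitors CW supply temperature and flow (the monitored signals and thresholds are illustrative assumptions):

    # Dual fluid failover: a PAC unit runs on chilled water (CW) and falls back
    # to its DX circuit if the CW supply fails. Thresholds are illustrative.
    CW_SUPPLY_MAX_C = 16.0  # assumed: CW warmer than this cannot hold the load
    CW_FLOW_MIN_LPS = 2.0   # assumed minimum acceptable flow, litres per second

    def cooling_source(cw_supply_c: float, cw_flow_lps: float) -> str:
        cw_healthy = cw_supply_c <= CW_SUPPLY_MAX_C and cw_flow_lps >= CW_FLOW_MIN_LPS
        return "CW (primary)" if cw_healthy else "DX (standby)"

    print(cooling_source(cw_supply_c=10.0, cw_flow_lps=5.0))  # CW (primary)
    print(cooling_source(cw_supply_c=19.0, cw_flow_lps=0.0))  # DX (standby)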