Data center operators and IT tenants have traditionally adopted a binary view of cooling performance: it either meets service-level commitments, or it does not. The relationship is also coldly transactional: as long as sufficient volumes of air of the right temperature and quality reach the IT rack (meeting service-level agreements that typically follow ASHRAE’s guidance), the data center facility’s mission has been accomplished.

What happens with IT cooling after that point, and how it affects IT hardware, is not the facilities team’s business.

Times are changing

This practice was born in an era when the power density of IT hardware was much lower, and when server processors still had a fixed performance envelope. Processors ran at a nominal frequency, defined at the time of manufacture, under any load. This frequency was guaranteed whatever the workload, provided sufficient cooling was available.

Chipmakers guide IT system builders and customers to select the right components (heat sinks, fans) via processor thermal specifications. Every processor is assigned a power rating for the amount of heat its cooling system must be able to handle at the corresponding temperature limit. This is not the theoretical maximum power but rather the maximum that can realistically be sustained (seconds or more) running real-world software. This maximum is called thermal design power (TDP).

The majority of software applications don’t stress the processor enough to get close to the TDP, even if they use 100 percent of the processor’s time — typically only high-performance computing code makes processors work that hard.

With frequencies fixed, this means power consumption (and heat output) is, in most cases, considerably below the TDP rating. Since the early 2000s, nominal processor speeds have tended to be limited by power rather than by the maximum speed of the circuitry, so for most applications there is untapped performance potential within the TDP envelope.

This gap is wider still in multicore processors when the software cannot exploit all the cores present, leaving an even larger portion of the power budget unused for application performance. The higher the core count, the bigger this gap can be unless the workload is highly multithreaded.

Processors looking for opportunities

Most server processors and accelerators that came to market in the past decade have mechanisms to address this (otherwise ever-growing) imbalance. Although implementation details differ between chipmakers (Intel, AMD, NVIDIA, IBM), they all dynamically deploy available power budget to maximize performance when and where it is needed most.

This balancing happens in two major ways: frequency scaling and management of power allocation to cores. When a modern server processor enters a phase of high utilization but remains under its thermal specification, it raises supply voltage and then frequency in incremental steps. It continues stepping up until it reaches whichever preset limit comes first: frequency, current, power, or temperature.
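As a minimal sketch of this control loop (in Python, using a simplified dynamic-power model where power scales with frequency times voltage squared; every limit, step size, and constant below is hypothetical, not any vendor’s actual value):

```python
# Illustrative sketch of opportunistic frequency/voltage stepping.
# Real implementations (e.g., Intel Turbo Boost, AMD Precision Boost)
# are far more sophisticated; all constants here are hypothetical.
F_STEP_MHZ, V_STEP = 100, 0.02                     # step sizes per iteration
F_MAX_MHZ, P_MAX_W, T_MAX_C = 3800, 280.0, 95.0    # preset limits
T_INLET_C, R_TH_C_PER_W = 30.0, 0.2                # inlet temp, die-to-air resistance

def dynamic_power_w(f_mhz: float, volts: float) -> float:
    """Dynamic power grows with frequency and the square of voltage."""
    return 0.093 * f_mhz * volts ** 2              # constant chosen for illustration

def boost(f_mhz: float = 2000, volts: float = 0.9) -> tuple:
    """Raise voltage, then frequency, until any preset limit would trip."""
    while f_mhz + F_STEP_MHZ <= F_MAX_MHZ:
        next_v = volts + V_STEP
        next_f = f_mhz + F_STEP_MHZ
        p = dynamic_power_w(next_f, next_v)
        t = T_INLET_C + p * R_TH_C_PER_W           # steady-state die temperature
        if p > P_MAX_W or t > T_MAX_C:
            break                                  # power or thermal limit reached
        f_mhz, volts = next_f, next_v
    return f_mhz, round(volts, 2)

print(boost())  # settles near 2700 MHz here: power, not fmax, binds first
```

With these made-up constants, the package power limit trips well before the frequency ceiling, which mirrors the point above: the binding constraint is usually power or temperature, not circuit speed.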

If the workload is not evenly distributed across cores, or leaves some cores unused, the processor allocates the unused power to highly utilized cores (if power was the limiting factor for their performance), enabling them to scale their frequencies even higher. The major beneficiary of independent core scaling is the vast repository of single-threaded or lightly threaded software, but multithreaded applications also benefit where they struggle with Amdahl’s law: when performance is held back by parts of the code that are not parallelized, overall speed depends largely on how fast a single core can work through those segments.
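Amdahl’s law itself is compact enough to state directly. A quick calculation (Python, with illustrative numbers) shows why faster individual cores still matter on many-core parts:

```python
# Amdahl's law: overall speedup when a fraction `parallel` of the work
# scales across n cores and the remainder runs on a single core.
def amdahl_speedup(parallel: float, n_cores: int) -> float:
    serial = 1.0 - parallel
    return 1.0 / (serial + parallel / n_cores)

# 90% parallel code on 64 cores yields only ~8.8x, not 64x: the serial
# 10% dominates, so boosting one core's frequency pays off out of
# proportion to its share of the power budget.
print(round(amdahl_speedup(0.90, 64), 1))   # 8.8
```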

This opportunistic behavior of modern processors means the quality of cooling, considering both supply of cold air and its distribution within the server, is not binary anymore. Considerably better cooling increases the performance envelope of the processor, a phenomenon that supercomputing vendors and users have been exploring for years. It also tends to improve overall efficiency because more work is done for the energy used.

Performance is best served cold

Better cooling unlocks performance and efficiency in two major ways:

  • The processor operates at lower temperatures (everything else being equal).
  • It can operate at higher thermal power levels.

The lowering of operational temperature through improved cooling brings many performance benefits, such as enabling individual processor cores to run at elevated speeds for longer without hitting their temperature limit.

Another, likely sizeable, benefit lies in reducing static power in the silicon. Static power is power lost to leakage currents that perform no useful work, yet keep flowing through transistor gates even when they are in the “off” state. Static power was not an issue 25 years ago, but it has become more difficult to suppress as transistor structures have become smaller and their insulating properties correspondingly worse. High-performance logic designs, such as those in server processors, are particularly burdened by static power because they integrate a large number of fast-switching transistors.

Semiconductor technology engineers and chip designers have adopted new materials and sophisticated power-saving techniques to reduce leakage currents. However, the issue persists. Although chipmakers do not reveal the static power consumption of their products, it likely accounts for a considerable share of a processor’s power budget, probably in the low double digits of percent.

Various academic research papers have shown that static leakage currents depend on the temperature of the silicon, but the exact profile of that correlation varies greatly across chip manufacturing technologies; such details remain hidden from the public eye.
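A common first-order approximation in the literature treats leakage as roughly exponential in die temperature. The sketch below uses that shape with entirely made-up coefficients, purely to show why cooler silicon frees up power budget:

```python
# Illustrative exponential leakage model. The reference leakage, the
# reference temperature, and the doubling interval are all assumptions
# for demonstration, not measured or published figures.
P_LEAK_REF_W = 25.0   # assumed leakage at the reference temperature
T_REF_C = 60.0        # reference die temperature
DOUBLING_C = 25.0     # assumed degrees C per doubling of leakage

def leakage_w(t_die_c: float) -> float:
    """Leakage power, modeled as exponential in temperature."""
    return P_LEAK_REF_W * 2 ** ((t_die_c - T_REF_C) / DOUBLING_C)

# Under this model, dropping the die from 85C to 55C frees ~28W of the
# package power budget for useful (dynamic) work.
print(round(leakage_w(85.0) - leakage_w(55.0), 1))   # 28.2
```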

Upgraded air coolers can measurably improve application performance when the processor is thermally limited during periods of high load, though such speed-ups tend to be in the low single digits of percent. This can be achieved by lowering inlet air temperatures or, more commonly, by upgrading the processor’s cooling to lower its thermal resistance. Examples include: larger, CFD-optimized heat sinks built from alloys with better thermal conductivity (e.g., copper-based); better thermal interface materials; and more powerful fans to increase airflow. If combined with better facility air delivery and lower inlet temperatures, the speed-up is higher still.
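The arithmetic behind both levers is a simple thermal-resistance model: steady-state die temperature is roughly the inlet temperature plus power times the die-to-air thermal resistance. The numbers below are illustrative only:

```python
# T_die ~= T_inlet + P * R_th, where R_th (in C/W) lumps together the
# heat sink, thermal interface, and airflow. All values are illustrative.
def die_temp_c(p_watts: float, t_inlet_c: float, r_th_c_per_w: float) -> float:
    return t_inlet_c + p_watts * r_th_c_per_w

stock    = die_temp_c(250, 35, 0.24)   # stock sink, warm aisle: 95C, at the limit
upgraded = die_temp_c(250, 30, 0.18)   # better sink and cooler inlet: 75C
print(stock, upgraded)                 # the 20C difference becomes boost headroom
```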

No silver bullets, just liquid cooling

But the markedly lower thermal resistance, and the consequently lower silicon temperature, of direct liquid cooling (DLC) make a more pronounced difference. Compared with air coolers at the same inlet temperature, DLC (cold plate and immersion) can free up more power by reducing the temperature-dependent component of static leakage currents.

There is an even bigger performance potential in the better thermal properties of liquid cooling: prolonging the time that server processors can spend in controlled power excursions above their TDP level, without hitting critical temperature limits. This behavior, now common in server processors, is designed to offer bursts of extra performance and can result in a short-term (tens of seconds) heat load that is substantially higher than the rated cooling requirement.

Typically, excursions reach 15 percent to 25 percent above the TDP, which did not previously pose a major challenge. However, in the latest generation of products from AMD and Intel, this results in up to 400 watts (W) and 420W, respectively, of sustained thermal power per processor — up from less than 250W about five years ago.
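How such excursions are policed varies by vendor; conceptually, many schemes resemble a sustained limit enforced on a moving average of power, plus a hard instantaneous cap (Intel documents this as its PL1/PL2 power limits, for instance). A simplified sketch, with hypothetical parameters:

```python
# Simplified sustained/burst power limiting. TDP_W acts as the sustained
# (PL1-like) limit on a moving average of power; BURST_W is an
# instantaneous (PL2-like) cap ~20% above it. All values are hypothetical.
TDP_W, BURST_W, TAU_S = 350.0, 420.0, 28.0

def grant_power(request_w: float, avg_w: float, dt_s: float = 1.0):
    """Clamp the request, then update the exponential moving average."""
    granted = min(request_w, BURST_W)
    if avg_w >= TDP_W:               # averaged budget spent: fall back to TDP
        granted = min(granted, TDP_W)
    avg_w += (dt_s / TAU_S) * (granted - avg_w)
    return granted, avg_w

avg = 200.0
for second in range(60):             # a workload requesting 420W for 60s
    granted, avg = grant_power(420.0, avg)
    if granted < 420.0:
        print(f"burst curtailed after {second}s")   # ~32s with these values
        break
```

With these made-up numbers, the processor holds 420W for roughly half a minute before the moving average catches up and power falls back toward the TDP level, matching the “tens of seconds” excursion behavior described above.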

Such high-power levels are not exclusive to processor models aimed at high-performance computing applications: a growing number of mainstream processor models intended for cloud, hosting, and enterprise workload consolidation can have these demanding thermal requirements. The favorable economics of higher-performance servers (including their energy efficiency across an array of applications) generates demand for powerful processors.

Although these TDPs and power excursion levels are still manageable with air when using high-performance heat sinks (at the cost of rack density, because of the very large heat sinks and the considerable fan power involved), peak performance levels will start to slip out of reach for standard air cooling. Server processor development roadmaps call for even more powerful models in the coming years, probably reaching 600W in thermal excursion power by the mid-2020s.

As processor power escalates and temperature limits grow more restrictive, even the choice of DLC operating temperature will pose a growing trade-off dilemma as data center and IT infrastructure operators try to balance capital costs, cooling performance, energy efficiency, and sustainability credentials. Inevitably, the relationship between data center cooling, server performance, and overall IT efficiency will demand more attention.
