The world has seen increasing demand for digital services in recent decades. That demand has only grown since the pandemic, when digital services were not just a boon but a lifeline for many during lockdowns and isolation.
As we moved beyond the pandemic, work practices also changed forever, as more and more people sought to work remotely, whether from home or from different geographies.
This, combined with evolving business models and digital transformation, has seen demand grow even further, along with new requirements that have seen the likes of edge computing proliferate.
All of this has driven growth in data centers, but it has also increased the pressure to meet demand. These pressures are bringing space, density, and power under the spotlight as limitations in some cases.
Scale of growth
To get an idea of the scale of growth in digital services in recent decades, data volume is a key indicator.
According to Statista, since 2010, the volume of data created, captured, and consumed has grown from two zettabytes to 97 zettabytes in 2022, with the figure for 2025 expected to be 181 zettabytes.
Despite this near-exponential growth, according to the International Energy Agency, data center energy demand has grown only from 194 terawatt-hours (TWh) in 2010 to just over 200 TWh in 2022.
These two contrasting figures show the extraordinary strides made in computing energy efficiency over that period, especially in pure processing power.
With Moore’s Law in effect for the period, the benefits are clear. Now, though, there are concerns from no less a figure than Nvidia CEO Jensen Huang that the Moore’s Law effect may be coming to an end. While this is disputed, there can be little doubt that processors will continue to become more powerful, producing more heat in the process.
As demand grows towards 2025 and beyond, data center limitations are likely to be encountered, with space chief among them.
Space and power
Space has often been seen as one of the chief limitations for data centers. It was typically described in terms of the power that could be delivered to equipment in a given unit of area, such as watts per square foot or square meter.
This was a useful rule of thumb for specifications and facility design: architects would plan cooling and power around such measures. Under this approach, data centers progressively became hotter, using more power to provide increasing levels of processing.
In an air-cooled data center, this required ever more air to be pumped through, meaning that for every watt drawn, less and less went to compute. As a result, data centers in the nineties and early 2000s became progressively less efficient at converting the power they drew into useful data processing.
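This shrinking share of power reaching compute is what the industry's power usage effectiveness (PUE) metric captures. A minimal sketch, using illustrative figures not taken from the source:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (every watt drawn reaches IT equipment);
    heavily air-cooled legacy facilities often ran at 2.0 or worse.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical example: a 1,000 kW facility whose IT gear draws 500 kW
print(pue(1000, 500))   # → 2.0: only half the power does useful compute
print(pue(1000, 800))   # → 1.25: a far more efficient facility
```

The lower the PUE, the smaller the overhead spent on cooling and power distribution rather than processing.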
As chip power kept rising through successive technological developments, and with ever more demand for performance, data center operators found themselves needing ever greater cooling volume and airflow, until they hit barriers of cost, complexity, and management.
Many reached a threshold where simply pumping a room full of air could no longer cool those chips, making air cooling increasingly unfeasible for much of what is already deployed.
Equipment management
Management also became an issue. As a data center evolved, equipment was often upgraded, altered, moved around, or replaced due to failure. Gaps, spaces, and expansions frequently meant that even carefully implemented methodologies, such as hot aisle/cold aisle systems, ended up working poorly, as airflow-management guidelines were ignored in the name of expediency and demand.
This could add to the impression of space limitations when a new project or service was contemplated, when in fact a properly managed facility could take on more before reaching the inevitable limit of pumped-air cooling.
What is clear from this is that while good management and design are key to ensuring physical space does not limit the ability of data centers to meet demand for digital services, air cooling is already a limitation and will increasingly be so in the future.
As other architectures emerge, such as edge computing, new cooling solutions will be required if service demand, physical space, energy efficiency, and sustainability needs are to be met.
To meet emerging demand for digital services in the foreseeable future, data center operators will need to consider hybrids of air, liquid, and direct-to-chip cooling, taking advantage of the specific characteristics of each to provide, appropriately and proportionately, the kind of cooling that allows density to be deployed reliably and economically.
Inefficient medium
There is a clear reality when it comes to cooling: the closer heat can be captured from where it is produced, the more efficient the process.
Allied to this is the fact that air is a very inefficient medium. A water-based coolant, or a dielectric liquid, is a far more efficient medium for capturing and transporting heat.
Even with the likes of hot and cold aisle layouts, rear doors and in-row coolers, blanking plates, and efficient cabling, air remains inefficient. While these measures are likely to stay part of the mix for many operators for years to come, other methods must be considered.
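Rough numbers make the gap concrete: water stores thousands of times more heat per liter than air, so far less of it has to flow to remove the same load. A back-of-the-envelope sketch, using standard physical constants and a hypothetical 10 kW rack:

```python
# Approximate volumetric heat capacities at room temperature (J per liter per kelvin)
AIR_J_PER_L_K = 1.2       # ~1.2 kJ/(m^3*K)  => 1.2 J/(L*K)
WATER_J_PER_L_K = 4180.0  # ~4.18 MJ/(m^3*K) => 4180 J/(L*K)

def coolant_flow_l_per_s(heat_w: float, delta_t_k: float,
                         cap_j_per_l_k: float) -> float:
    """Flow rate (liters/second) needed to carry heat_w watts away
    at a given coolant temperature rise of delta_t_k kelvin."""
    return heat_w / (delta_t_k * cap_j_per_l_k)

# Illustrative: remove 10 kW from a rack with a 10 K coolant temperature rise
air_flow = coolant_flow_l_per_s(10_000, 10, AIR_J_PER_L_K)      # ≈ 833 L/s of air
water_flow = coolant_flow_l_per_s(10_000, 10, WATER_J_PER_L_K)  # ≈ 0.24 L/s of water
print(f"air: {air_flow:.0f} L/s, water: {water_flow:.2f} L/s")
```

The ratio of roughly 3,500 to one is why liquid can be brought precisely to where heat is produced, while air must be moved in bulk through the whole room.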
Liquid cooling offers greater capability to accommodate equipment density than air cooling, and heat captured in liquid can be removed more efficiently from the immediate environs of the equipment and brought to potential reuse opportunities without a change of state.
Developments now available in liquid and direct-to-chip cooling can not only meet today’s density demands, relieving physical space limitations, but also offer a critical upgrade path that allows data center operators to move towards more efficient methods.
This will be crucial as budgets come under pressure amid ongoing inflation and continuing global uncertainty.
Strengths and purpose
With these new cooling techniques and systems, there is no one-size-fits-all approach. Each technique and system has particular strengths and characteristics that must be taken into account to ensure the right performance is delivered for each requirement.
In-rack, in-row, and direct precision liquid cooling each offer different applications and benefits toward an overall density and performance goal, all while delivering efficiency that contributes to sustainability targets.
Data center operators must be supported in their design and operational objectives by a trusted technology partner that not only has in-depth knowledge but also a broad portfolio of solutions to meet each need.
Understanding where better-managed air cooling can remain, where liquid cooling can be adopted, and where direct-to-chip cooling can be leveraged is key to getting current needs under control while building a path to future capability.
Improvements and a path forward
By properly examining real or perceived data center space limitations, data center operators can determine how best to tackle their density needs. More efficient, precise, and controllable cooling solutions will be a key part of that effort.
With efficiency as a central strand of sustainability efforts, hybrid systems of air, liquid, and direct cooling techniques can build a path to greater effectiveness in data center cooling that relieves space pressures, while meeting demand and providing a strong base for future growth.
A trusted technology partner, with broad knowledge and portfolio resources, can guide operators toward the most informed and appropriate use of these now-proven technologies to achieve their business ambitions sustainably.