When electricity started to gain traction around 100 years ago, power companies sought to put power stations as close to their users as possible to ensure reliability of supply. But if you lived out in the sticks, a long way from the nearest city, it was best to keep candles and flashlights close to hand: the further away you were, the less reliable the supply.
Life wasn’t necessarily terrific in the city, either, which might be served by several oil- or coal-fired power stations. Laundry left out to dry, and people’s lungs, could quickly turn grey as a result.
In time, though, the utility companies optimized their networks by locating power stations further away from major population centers but ensuring reliable supply nationwide with sub-stations in neighborhoods, up close to customers.
The data center sector is starting to undergo a similar revolution, believes Paul Quigley, president of AIRSYS Cooling Technologies, with hyperscale and major colocation facilities providing the backbone processing power, while Edge or micro data centers serve the role of the sub-station, bringing reliable, ultra-low-latency compute close to the end users, where it is most needed.
Moreover, adds Quigley, just like the equipment in electricity sub-stations, these Edge data centers will be standardized and modular in construction to make them easier to install and manage, but manufactured to customer specifications and delivered to site in one piece. They will arrive tested and ready to go: once the site is prepped and the modular data center delivered, all the customer essentially needs to do is plug it in.
Indeed, the Open Compute Project has already developed a set of industry standards for modular data centers covering size, weight, rack dimensions, and power distribution and cooling in order to drive consistency, integration, and interoperability and, hence, adoption.
“They are going to be everywhere. You’ll find them in hotels, hospitals, banks, and even highly secured military facilities,” says Quigley.
Hospitals need rapid and reliable access to medical records, and are increasingly conducting remote surgery, where low latency is critical. Mobile network operators, too, will roll out Edge data centers to provide processing power at their mobile base stations to support low-latency, compute-intensive applications such as autonomous vehicles.
For the military – and many other users – one of the attractions of modular is the ease with which facilities can be moved from one site to another, according to need.
The war in Ukraine provides an insight into how they might be used at the sharp end in the future: military authorities in Ukraine, with the assistance of allies, analyze terabytes of data every day taken from drones, satellites, and other tools to shape strategy on a minute-by-minute basis.
“When we talk about a modular data center, that means the entire facility is installed or built in a factory: everything from the servers, the power, the UPS, the cooling – everything,” says Quigley. The standard 40-foot container is the typical form factor – not for shipping overseas, but for loading onto the back of a trailer to be transported to its site by road.
“It should already have been started, run-tested and commissioned before it leaves the factory so that it can be up and running when it’s dropped off at its location: just connect the power lines and flip the switch, to put it simply,” he says.
The rise of modularization is backed up by solid research. The market is forecast to double between 2020 and 2025 – from around $18.4bn to $37.8bn, according to a report by market research firm MarketsandMarkets, a compound annual growth rate (CAGR) of more than 15 percent.
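For anyone who wants to sanity-check those numbers, a quick back-of-the-envelope calculation (purely illustrative; the figures are the ones quoted above) confirms that growing from $18.4bn to $37.8bn over five years works out at roughly 15.5 percent a year:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over a whole number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# MarketsandMarkets forecast quoted above: ~$18.4bn in 2020 to ~$37.8bn in 2025
print(f"{cagr(18.4, 37.8, 5):.1%}")  # ~15.5%, i.e. "more than 15 percent"
```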
Among the telecoms carriers that Quigley has been talking to recently, growth looks to be even faster: as much as 25 percent CAGR. According to Quigley, thousands of modular data centers are now being planned every year in the US alone for the foreseeable future.
But the challenge for modular data center builders is the same as for conventional data centers: to ensure that construction, maintenance, and operations are as efficient as possible. One of the key areas where that can be achieved is cooling.
Cooling conundrum
The cooling system presents a particular conundrum for modular data center builders. On the one hand, with cooling typically accounting for anywhere between 20 percent and 50 percent of total power consumption, it is the biggest environmental, power, and therefore financial running cost after the data center compute itself.
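As a rough illustration of what that range implies, if we assume for simplicity that everything other than cooling is IT load (ignoring UPS, lighting, and distribution losses), the cooling share maps onto a facility PUE as follows:

```python
def implied_pue(cooling_share_of_total: float) -> float:
    """Rough PUE if cooling were the only overhead and the rest were IT load.

    Simplification for illustration only: ignores UPS losses, lighting,
    and power distribution, which also add to real-world PUE.
    """
    return 1.0 / (1.0 - cooling_share_of_total)  # total power / IT power

for share in (0.20, 0.35, 0.50):
    print(f"cooling at {share:.0%} of total power -> PUE of roughly {implied_pue(share):.2f}")
# cooling at 20% of total power -> PUE of roughly 1.25
# cooling at 35% of total power -> PUE of roughly 1.54
# cooling at 50% of total power -> PUE of roughly 2.00
```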
On the other, an enclosed space like a modular data center, perhaps located somewhere highly sub-optimal (such as at a mobile base station in a field) needs a cooling system that will ensure that all incoming air is filtered, that any detritus from outside is kept outside, and that can be proactively managed and maintained remotely, as far as possible.
In other words, cooling is the single biggest determinant of a modular data center’s health and one of the largest elements of its running costs – and these facilities will invariably be deployed in locations with minimal, if any, on-site maintenance staffing. If the cooling goes down, so does the facility, with autonomous vehicles, hospital operations, or maybe even military operations hanging in the balance as a result.
“Modular data centers have very specific spatial and power requirements, particularly in terms of cooling,” says Quigley. “Due to their small size and compact design, modular data centers often require high-density computing equipment that generates a significant amount of heat. This can lead to challenges in managing the temperature and humidity levels inside the data center, affecting both the performance and the reliability of the equipment.”
Not only does the cooling system need to fit into the modular space, but it must also be scalable for future growth, expansion, and load diversification. “The overall design of a modular data center must take into account these challenging spatial and power demands to ensure that the data center can operate efficiently, reliably, and safely,” says Quigley.
The typical modular data center design involves hot and cold aisles, with the server racks fitted down the middle. In the enclosed space of a container, the effect of pressurization (whether too much or too little) on the efficient running of the fans also needs to be taken into account.
In the interests of efficiency, AIRSYS cooling systems, including the Unicool-Edge™ designed specifically for modular data centers, use free-air cooling in the first instance – provided it is suitable, says Quigley. “Data centers are almost like clean rooms. They're hypersensitive to dust and particles and any kind of contaminant. So every one of our units has a laser particle counter on it and is constantly sampling the air.
“The first test, therefore, is whether the incoming air temperature is low enough to be useful for cooling; if it is, the next question is whether it is clean enough. If it’s still safe, then we’ll go ahead and open it up, but the laser sensor is still working away. If a farmer drives by with a trailer full of freshly harvested hay, the intake can be shut off before the load hits the air filters. The air quality is then tested at regular intervals to determine when it is suitable to resume free-air cooling.
“Moreover, in every single one of our cooling systems we use pressure sensors on either side of the filter, constantly calculating how close the filter is to reaching capacity, monitoring it, and sending data back so that predictive maintenance can be performed weeks in advance. We also use transducers in the refrigerant line to send alerts when the condenser coil is getting dirty and the unit is not operating at peak efficiency.
“After all, a filter can only be as effective as the maintenance person who is supposed to be taking care of it,” says Quigley.
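Taken together, the sequence Quigley describes – check the outside air temperature, check particle counts, open or close the free-cooling intake, and watch the filter’s differential pressure for predictive maintenance – amounts to a simple control loop. The sketch below is purely illustrative: the thresholds, sensor names, and structure are assumptions made for the sake of the example, not AIRSYS’s actual control logic.

```python
from dataclasses import dataclass

# Illustrative thresholds only -- not AIRSYS specifications.
MAX_SUPPLY_TEMP_F = 65.0        # outside air must be cool enough to be useful
MAX_PARTICLE_COUNT = 10_000     # particle counter limit before closing the intake
FILTER_DP_ALERT_PA = 250.0      # pressure drop at which the filter is nearly at capacity

@dataclass
class Sensors:
    outside_temp_f: float       # outside air temperature
    particle_count: int         # laser particle counter reading
    filter_dp_pa: float         # differential pressure across the air filter

def control_step(s: Sensors) -> dict:
    """One pass of a hypothetical free-cooling control loop, run at regular intervals."""
    temp_ok = s.outside_temp_f <= MAX_SUPPLY_TEMP_F      # 1. Is the air cool enough?
    air_clean = s.particle_count <= MAX_PARTICLE_COUNT   # 2. Is it clean enough?
    free_cooling = temp_ok and air_clean
    return {
        "free_air_intake_open": free_cooling,
        "mechanical_cooling_required": not free_cooling,
        # 3. Predictive maintenance: flag the filter weeks before it clogs.
        "schedule_filter_service": s.filter_dp_pa >= FILTER_DP_ALERT_PA,
    }

# Example: cool but dusty air (the hay trailer) -> intake stays shut, compressor takes the load
print(control_step(Sensors(outside_temp_f=55.0, particle_count=80_000, filter_dp_pa=120.0)))
```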
At the same time, he adds, AIRSYS migrated to variable speed compressors more than a decade ago, making its cooling systems between 30 percent and 70 percent more efficient than systems with standard and staged PSC compressors that frequently cycle between ‘on’ and ‘off’.
What this means is that the Unicool-Edge, just like every other cooling system in the AIRSYS line-up, can make use of outside air with only five degrees Fahrenheit of temperature difference, and the compressor can ramp up as the temperature outside warms up. Typical PSC air conditioners stop providing free cooling at a 20-25 degree delta. Likewise, AIRSYS cooling can simply turn the compressor down when it is least required, prolonging the life of the compressor and improving reliability compared with systems that constantly turn on and off.
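The practical difference between the two approaches can be sketched as follows. A fixed-speed PSC unit either runs its compressor flat out or not at all, while a variable-speed unit scales compressor output to the available temperature delta; the thresholds and the linear ramp below are illustrative assumptions, not AIRSYS performance data.

```python
def psc_compressor_output(delta_t_f: float) -> float:
    """Fixed-speed PSC behaviour: no free-cooling benefit until a large delta, otherwise full compressor."""
    return 0.0 if delta_t_f >= 20.0 else 1.0          # 1.0 = compressor running at 100 percent

def variable_speed_output(delta_t_f: float) -> float:
    """Variable-speed behaviour: start using outside air at a 5F delta and wind the
    compressor down as the delta grows (linear ramp chosen purely for illustration)."""
    if delta_t_f <= 5.0:
        return 1.0                                    # too little delta: compressor carries the load
    if delta_t_f >= 20.0:
        return 0.0                                    # plenty of free cooling available
    return (20.0 - delta_t_f) / 15.0                  # partial assist in between

for delta in (2, 5, 10, 15, 20, 25):
    print(f"delta {delta:>2}F  PSC: {psc_compressor_output(delta):>4.0%}  "
          f"variable speed: {variable_speed_output(delta):>4.0%}")
```

In this simplified picture, the variable-speed unit starts shedding compressor load as soon as a modest delta appears, which is part of where the claimed 30 to 70 percent efficiency advantage at part load comes from.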
The Unicool-Edge, meanwhile, is not only factory-fitted to the side of the modular facility, so that none of the expensive ‘real estate’ inside is wasted, but is also stackable: if the facility needs more cooling capacity for any reason, up to four units can be fitted, one on top of the other, precisely to the width of the facility – meaning up to eight per data center if both ends of the building are used. Allied to the variable-speed compressor, says Quigley, this arrangement can provide eight ‘units’ of cooling from just one unit of power.
Made to measure
Remarkably, perhaps, despite the fast growth of the modular data center market, the cooling solutions typically deployed are anything but made to measure. Instead, they’re invariably off-the-shelf cooling systems originally designed for conventional facilities, and therefore poorly suited to modular data centers.
That means that they are over-specified and oversized, and consume much greater levels of power to provide sub-optimal cooling. Very few are capable of efficiently cooling high-density loads, and none can scale as easily as the Unicool-Edge.
“Off-the-shelf cooling systems may be less expensive to buy, but they rarely fit perfectly with the modular concept and sacrifice efficiency, high-density cooling effectiveness, and durability,” says Quigley. Moreover, any small upfront cost savings are quickly outweighed by higher deployment, power, and maintenance costs.
After all, anyone relying on the modular data center in their hospital or at a 5G base station for an essential service will be relying on it to work flawlessly 100 percent of the time, just as we rely on the electricity always being on when we need it at any time today.
To find out more about cooling solutions made-to-measure for modular data centers, go to AIRSYS' dedicated website.