The data center sector ought to have a good story to tell on sustainability, but instead it faces censure. Globally, the industry is estimated to consume between 200 terawatt hours (TWh) and 500 TWh of electrical energy every year. That means that, annually, it uses between about one and three per cent of global power output.
And yet, over the past 30 years, the number of data centers, not to mention overall compute capacity, has boomed while overall power consumption and CO2 output have increased at a considerably lower pace. “We have had a five, six, seven hundred percent growth in terms of data center capacity, but we've had a very minor increase in terms of energy consumption as a result,” says Niall Killeen, global director of building commissioning at CAI.
“In the European space, the data I’ve seen shows a massive increase in the volume of data centers, but energy consumption has only risen by three or four per cent,” he adds.
While the industry has been characterised as energy hungry, it has nevertheless had a focus on driving down power consumption for the past two decades or so, even before today’s emphasis on sustainability – albeit not necessarily purely for environmental reasons. “Energy is cost, so there was always a big drive to try and be more effective, more efficient, to reduce costs, while delivering an equivalent service,” says Killeen.
Harry Benson, global director of human performance at CAI and an industry veteran, traces the industry’s focus on power consumption back to the fossil fuel energy price spikes from around 1999. “That’s when we really started talking about reducing the energy cost per kilowatt or megawatt. Then, around 2006, maybe a little earlier, the emphasis started to shift towards sustainability, alongside energy costs,” says Benson.
At the same time, suppliers were also pulled into this effort, in many respects by the demands of big-buying hyperscalers and other major IT buyers, suggests Uptime Institute researcher David Mytton.
He notes that the rise of cloud computing, in particular, has enabled “efficiency improvements to take place at a huge scale”. These efficiency improvements, he adds, have occurred across the board: with CPUs, servers, conventional storage, SSDs, as well as every significant item of data center infrastructure improved in efficiency, enabling the power usage effectiveness (PUE) figures of each new facility to be driven ever-closer to one.
Google, for example, claims that businesses using Gmail have decreased the environmental impact of running a corporate email service by up to 98 percent compared to the ‘old way’ of doing email [PDF] – using locally hosted servers running Microsoft Exchange or SendMail, 24 hours a day, seven days a week.
Hence, in many respects, data centers are running corporate IT more efficiently than when it was run entirely in-house; in many organizations, from a server under an engineer’s desk or in a poorly ventilated server room that was never really designed for running computers efficiently.
Nevertheless, this is no longer good enough. In the European Union, tougher data center efficiency regulations are coming down the track, and a number of big-name data center operators have even committed to achieving net-zero carbon emissions; some within less than a decade. And if they can do it, regulators will argue, everyone else should be able to as well.
End of the road for PUE?
Power usage effectiveness (PUE) was first introduced in 2006, just as, Benson noted, the focus on energy efficiency was starting to evolve into a broader emphasis on sustainability. PUE measures the ratio of the energy used by the entire data center to the energy used by the IT equipment alone. The closer to one, the more efficient the data center.
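The calculation itself is simple, as a short sketch makes clear (the function name and the example figures here are illustrative; the 1.59 average is the Uptime Institute figure cited later in this article):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by
    IT equipment energy. A value of 1.0 is the theoretical ideal."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,590 kWh in total to power 1,000 kWh of IT load
# matches the 1.59 industry average reported by the Uptime Institute.
print(round(pue(1590, 1000), 2))  # prints 1.59
```

Everything above 1.0 represents energy spent on cooling, power conversion losses, lighting and other overheads rather than on computing.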
It only became a global standard in 2016 (ISO/IEC 30134-2:2016), but just five years later, questions are increasingly being raised about whether the metric is about to outlive its usefulness.
Today, the most power efficient data centers, in terms of PUE, are the hyperscale showpieces built by Google, Facebook, Microsoft and Amazon, each trying to outdo the others on environmental terms. Google, for example, claims to have squeezed the PUE for its entire fleet of data centers down from an average of 1.22 in 2008 to just 1.10 by 2021. One EU-funded research project even claims to have achieved a PUE of 1.0148 – albeit in a location (Boden, Sweden) just 50 miles from the Arctic Circle, where it can enjoy free-air cooling all year round.
But the rest of the industry, not surprisingly, lags a long way behind.
Last year, the average data center PUE weighed in at 1.59, according to the Uptime Institute’s annual survey of global data centers of between one and 60 megawatts. This is a long way behind industry leaders like Google and Facebook, and only marginally better than seven years ago, says Uptime Institute executive director Andy Lawrence.
There are a number of reasons for this, he says. First, the easiest (and cheapest) efficiency measures were taken between around 2007 and 2013, such as optimising the layout of server halls for cooling. This slashed average PUEs from 2.5 to 1.65. Since then, improvements have required ever-bigger financial investments to make ever-more marginal gains.
Second, the real-world power efficiency of a data center at opening is very much a design optimum, and might not reflect its actual usage until it has scaled up and is fully occupied. Indeed, the real-world power efficiency of the data center will vary according to the electrical load, and it will have a certain fixed load it will draw even if all the IT equipment in the server hall were switched off.
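That fixed load is why a facility’s PUE changes as it fills up. A minimal sketch, using an assumed 200 kW fixed overhead and illustrative IT loads rather than figures from any particular facility:

```python
# Assumption for illustration: cooling, lighting and conversion losses
# draw a fixed 200 kW regardless of how busy the server hall is.
FIXED_OVERHEAD_KW = 200.0

def pue_at_load(it_load_kw: float) -> float:
    """PUE for a simple facility model with a constant overhead."""
    return (it_load_kw + FIXED_OVERHEAD_KW) / it_load_kw

for load in (250.0, 500.0, 1000.0, 2000.0):
    print(f"{load:>6.0f} kW IT load -> PUE {pue_at_load(load):.2f}")
# prints PUE 1.80, 1.40, 1.20, 1.10 respectively
```

A half-empty facility can thus report a much worse PUE than its design figure, even though nothing about the building has changed.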
Then, years down the line, when the cooling infrastructure is lined with a fine layer of limescale, IT equipment has been replaced several times and the server halls have been re-organised twice over, the PUE (if it were to be re-assessed) might not be so good.
“We're typically building the fabric of the building, the electrical mechanical infrastructure for a 20 to 30 year lifespan. The servers that sit within the heart of those buildings might have a three year lifespan, at most, running in the most optimal way,” says Killeen. They won’t just be worn out from being pounded 24 hours a day, seven days a week, but technology will have moved on in any case.
“Obviously, you don't rebuild the building, but you might have to introduce additional cooling if your energy profile or energy density has increased; you might find that the cooling requirement has increased and, therefore, you have to upgrade some elements of the cooling infrastructure.”
Furthermore, the moment the data center is opened, engineers have a habit of turning the air conditioning right down, in contravention of the latest recommendations, in the belief that the servers will be better looked after at much lower temperatures.
“As is often the case with capital project focus and funding in any industry, the emphasis is on construction speed. But even ‘lights out’ data centers require qualified operators and technicians to manage and maintain continuous operations amid the inevitable changes and issues over the life of the site,” says Benson.
He continues: “From day two of operations, IT refreshes, load expansion, heat-load shifting, and other changes begin to affect the site’s efficiency and pose a threat to performance, as designed. People in operations must therefore be well-equipped to do their job, which requires more structured role qualification than the industry typically invests in, along with training on technology and best practices to avoid deterioration of site performance. This approach also works to control staff turnover, which is no small issue in the current talent-strained conditions.”
Focusing solely on PUE can also generate perverse incentives. Configuring power management or shutting down unnecessary or less efficient compute equipment (say, overnight) reduces the IT load while the facility’s fixed overhead stays the same, which worsens the PUE figure even though the data center is consuming less power overall.
“It can be a bit like the well-established quality standards. You can have a horribly bureaucratic and terribly run organization, but you can still achieve the coveted ISO 9000 standard because you document how terrible it is,” says Nick Armstrong, global director, asset management & reliability at CAI. Often, he adds, “standards are written for the lowest common denominator. More than 120 countries have to sign off on them, so you can’t necessarily write EU standards for Zimbabwe, it just wouldn’t work.”
Finally, the best PUEs are typically filed by the hyperscalers who have the resources to invest in shaving off every last 0.01 from the PUE of their latest data center. That even includes developing their own specialist chips, servers and other IT hardware for particular tasks. For example, ARM-based servers are especially efficient for the comparatively simple task of serving web pages. Hence, for companies like Google and Facebook, an investment in specialist servers can pay off handsomely, but that isn’t an option open to everyone.
In other words, PUE might have provided a focus for sustainability and helped move things in the right direction, but on its own it is a far from perfect guide.
“To me, rather than looking for a replacement metric, you would use PUE in combination with a number of other measures, such as sustainability measures, as opposed to throwing PUE aside,” says Benson. It’s unrealistic, he continues, to expect one single measure to govern something as complex and multi-faceted as sustainability or net-zero.
Carbon usage effectiveness is one such measure, continues Benson, given that steel and concrete – two of the most common building materials – are both produced using carbon-intensive processes. Data centers can scarcely be built out of wattle and daub.
Ultimately, the industry can talk a good game on sustainability, but as long as the incentives and KPIs primarily relate to uptime, that is what will be prioritised.
“I worked with a manufacturer that brought in a plant manager who literally said to me, ‘My KPIs get me my bonus, and they say we need to lower our downtime’. So he eliminated all maintenance on all systems for three years because he knew he wasn’t going to be there for longer than three years. He hit his KPIs, but the company paid the price eventually. But he got his bonuses, so that's what counted,” says Armstrong.
In short, if sustainability is a priority for the data center sector, then that will need to be reflected in KPIs and bonuses. Staff will also require better training to impart sustainability aims and processes, suggests Killeen. “Our setpoint for facilities tends to be about 26 degrees Celsius or so, yet you'll find operators that will come in and reduce that straightaway to 21, or 20 degrees, or even lower.
“A data center server hall is not intended to be a habitable space. It doesn’t have to be run at those low temperatures just because an engineer is going to wander through once or twice a day. Servers are very happy sitting at 30 degrees, maybe 35 degrees, according to the ASHRAE guidelines,” he says.
Step by step
Greater automation can help, adds Armstrong. Predictive modelling and other machine-learning tools could be deployed (or integrated into, or with, data center management systems) to maintain data center systems at their most efficient state – providing insight into optimal sustainability outputs, rather than focusing purely on measures.
The upcoming DCD Trends in Data Center Automation survey indicates that the sector still has a lot to do in terms of implementing automation technologies, with much of the industry unconvinced by the merits of AI and machine learning, not to mention software-defined power. However, the survey also indicated that the sector might be on the cusp of widespread adoption of robotics over the next decade or so, largely with robots and robotic devices working alongside staff, rather than replacing them by performing complex tasks independently.
On top of that, the need for 99.999 percent uptime (or better) also means there’s a tremendous amount of duplication in the form of redundancy built into many data center architectures in order to provide that high level of resilience. Perhaps, says Killeen, organizations need to rethink such over-engineering.
The needs of a data center providing services to investment banks and stock market traders, for example, will be very different from one serving web pages, running online multiplayer games or streaming movies to couch potatoes.
“We frequently talk about this, the differences between redundancy versus reliability. I've had clients tell me ‘I don't need reliability, because I have redundancy’. That means you’re spending twice as much money to achieve something you didn't really need to achieve. So, redundancy comes with a significant cost, and people forget that the carbon impact of redundancy is significant,” says Armstrong. Indeed, without reliability, he suggests, how can you be confident that the backup system will be there when you need it?
The best data center buildings, adds Armstrong, are built like Lego bricks. “If I have a system that does go down, that catastrophically fails, I can unplug it and plug a new one in with little or no impact on operations.
“I recently worked with a client that put in an HVAC system to serve an area that would normally be designed for three segregated systems. Instead, they chose a design offering redundant air supply units, thus putting their greatest risk in the system under redundancy, but then spent considerable effort on the reliability of the remainder of the system, which had no redundancy.
“What they could have done instead is install one set of duct-work, one set of controls and put two or three air handlers on a central manifold, so that if one goes down they’re okay. If an HVAC system goes down in their design, an entire section of their building goes down.”
Ultimately, says Benson, data centers need to be designed from the outset with the idea that they will evolve over time, and therefore be adaptable accordingly – especially in terms of providing efficient cooling.
“It’s really about ‘moves, adds, and changes’. You have to foresee that they’re going to occur for many reasons, and your physical infrastructure has to have flexibility to remove that heat, to dial-up or down the heat removal and the power delivery. A flexible design that’s operator friendly and operations friendly will make a huge difference in how well you can manage that problem,” he says.
The industry, says Killeen, is also looking to apply more imaginative approaches to power, especially for well-established data center operators running with vastly more capacity than they need.
“Clients are now also increasingly investing in what’s called ‘energy harvesting’ or ‘power harvesting’. This has become reasonably popular, where they have found out over time that they have over-specced the capability of what they have constructed.
“So they’re typically building in two megawatt chunks and they have scaled accordingly to suit that two megawatt chunk. But they have found that they have maybe 200 kilowatts of additional capacity in a number of these spaces.
“Rather than waste the power, they can afford to increase the rack density to better use that power and harvest it and be more effective and more efficient. It increases the revenue potential, and it also more effectively uses the power they've got,” says Killeen.
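The arithmetic behind the harvesting Killeen describes is straightforward. The 2 MW chunk and 200 kW of slack come from his figures above; the rack count and per-rack draw below are illustrative assumptions:

```python
# Figures from the article: capacity is built in 2 MW chunks, and
# operators find roughly 200 kW of unused headroom per chunk.
chunk_kw = 2000.0
headroom_kw = 200.0

# Assumption for illustration: 200 racks per chunk.
racks = 200

current_per_rack_kw = (chunk_kw - headroom_kw) / racks  # today's density
harvested_per_rack_kw = chunk_kw / racks                # density using the slack

print(f"Current density:   {current_per_rack_kw:.1f} kW/rack")   # 9.0
print(f"Harvested density: {harvested_per_rack_kw:.1f} kW/rack") # 10.0
```

In this sketch, using the headroom supports about 10 percent more IT load per rack from power the operator has already provisioned.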
And that’s not the only way in which data center operators can both boost efficiency and take a step towards achieving sustainability goals.
Gas backup generators – in place of diesel – can also help, with an increasing number of ‘grid balancing schemes’ opening up that enable operators to sell power back to the grid when renewable energy sources fall short.
Indeed, the range of potential approaches to the issue of sustainability will require data center operators to consider their options very closely in the coming years as their choices could make a big difference, not just to clients and shareholders, but to the environment as well.