Five years ago, large organizations were just starting to become aware of the green agenda. Recycling and cutting their carbon footprint could improve their image, they realized. They gave the responsibility to their corporate social responsibility (CSR) officers and prepared to invest.

Then the financial crash hit. There was no budget for greenwash. Cuts were needed. But this didn’t kill the green shoots: cutting energy use in data centers also cut energy costs, so Green IT lived on in the movement to improve data center efficiency.

The chief enabling technologies became improved cooling, virtualization and the cloud – all of which let companies consolidate their hardware, increase its utilization and cut waste. The Green Grid, and the European Code of Conduct, laid out the best ways to run a data center without wasting energy.

During that same period, environmental activists noticed the growth of the cloud. Huge data centers were popping up all over, belonging to Google, Amazon, Apple, Microsoft and Yahoo.

Greenpeace laid into Apple – and then Amazon – accusing them of using non-renewables to fuel their mushrooming data centers. The big firms, to a greater or lesser extent, responded.

But what is the reality behind this account of the history of green data centers? Let’s take a look at three myths:

Myth 1: Cloud players are the worst offenders
With environmentalists directing their anger at the big cloud players, you might think they are the ones with the highest carbon footprint. In fact, this is not the case.

There is a widely-circulated estimate that data centers use about two percent of the electrical power of the US, and the figure is thought to be similar elsewhere. The Natural Resources Defense Council (NRDC) broke down how that power is used across the different sectors of the industry, and its pie chart tells an interesting story.

Google, Amazon, Facebook, Apple and that crowd together make up the “hyper-scale cloud computing” sector, which consumes less than five percent of the power used by US data centers.

The biggest power users are the data centers of small-to-medium sized organizations, which burn about half the power used by US data centers (which works out to about one percent of America’s total consumption, remember).

The next biggest sector is enterprise data centers, with about a quarter of the total, and next down is the colocation sector, which consumes about one fifth.

The only sector smaller than the cloud players is the supercomputer or high-performance computing (HPC) sector, where a tiny number of organizations run very power-hungry systems.

Why does power use break down this way? Well, whatever the hype says, cloud computing is in its infancy, so Amazon and Google are still scratching the surface of their potential business. In theory, all the enterprise and small business IT in the pie could be theirs.

But it’s also worth noting that the largest sectors are the ones where the cost of electricity has the least influence on how much of it is used. In large-scale sites, electricity can be the major cost, so the operators work hard to optimize it – and that goes for colocation providers too.

Enterprise IT shops will probably have electricity charged back to them, but they are regarded as an essential service, and a cost center, so they get what they want.

In the smaller businesses, the IT department may not even see the power bill, and the business may effectively have no control at all.

Myth 2: Data center efficiency boils down to PUE
The first thing any data center operator does to show his or her credentials is to quote a Power Usage Effectiveness (PUE) figure. It’s a simple ratio: the total power consumed by the site, divided by the amount of power delivered to the IT racks.

It’s a powerful measure, and comes from a volunteer organization, the Green Grid. The best (theoretical) score is 1, where all power reaches the IT equipment and none is wasted. If you score 2 or above (like most of the small and legacy data centers out there), you are spending more power on cooling and other overhead than on the actual IT job.
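
As a concrete illustration, here is a minimal sketch of that ratio in Python, with made-up figures (no real site is being described):

    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        """Power Usage Effectiveness: total facility power divided by IT power."""
        return total_facility_kw / it_load_kw

    # Hypothetical site: 1,500 kW drawn at the meter, 1,000 kW reaching the racks.
    print(pue(1500, 1000))  # 1.5 -- half a kW of overhead for every kW of IT load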

Any new data center will quote a PUE, and for PR purposes, it had better be less than 1.2 – because the tech media has got a handle on PUE, and will quote it with approval.

But PUE does not address what goes on inside the IT racks, and if you are operating an IT estate, you may well find that is where the most effective efficiency improvements are to be had.

The US government invited data centers to seek a 20 percent energy cut in its Better Buildings Challenge, gave a name-check to PUE, and then said the best way to cut data center energy is “improving the utilization of servers”. This makes sense: “every kW saved with IT equipment can potentially result in nearly 2kW saved in the infrastructure”, the Better Buildings literature said.
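
The arithmetic behind that claim is simple: at a PUE of around 2, each kW of IT load drags roughly another kW of cooling and power-distribution overhead along with it, so removing the IT load removes both. A rough sketch, with illustrative numbers only:

    # Illustrative only: assumes facility overhead scales roughly linearly with IT load.
    pue = 2.0            # typical of many legacy facilities
    it_saving_kw = 1.0   # power saved at the racks, e.g. by consolidating servers

    total_saving_kw = it_saving_kw * pue
    print(total_saving_kw)  # 2.0 kW -- the IT kW plus the overhead it no longer drags along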

Since the Green Grid started to promote PUE, average scores have fallen nicely by something like 20 percent, which does translate to a cut in overall power.

But there’s a bigger and thornier issue here – how do you measure and report the actual utilization of your IT equipment, and how do you improve its efficiency? This needs some sort of measure of actual work per kWh, and “actual work” is subjective.
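
For what it is worth, such a productivity metric might look something like the hypothetical sketch below: a “transactions per kWh” figure, where the choice of what counts as a transaction is exactly the subjective part.

    def work_per_kwh(useful_transactions: int, energy_kwh: float) -> float:
        """Hypothetical productivity metric: units of 'useful work' per kWh consumed.

        What counts as a transaction (web requests? batch jobs? VMs served?) is a
        subjective choice, so two sites using different definitions cannot be compared.
        """
        return useful_transactions / energy_kwh

    # Illustrative figures only.
    print(work_per_kwh(useful_transactions=12_000_000, energy_kwh=850))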

Myth 3: Increased efficiency will cut data center energy use
It’s very easy to assume that an increase in energy efficiency will result in a decrease in the amount of energy used. It seems obvious.

But during the last industrial revolution, an economist called William Stanley Jevons noticed that this isn’t always the case. Suppose engineers create a way to burn coal more efficiently in steam engines, Jevons said in his 1865 book The Coal Question. This could lead to the price of steam power falling – and that might then cause demand to rise, with the end result that more fuel is burnt.

The so-called “Jevons Paradox” is hotly argued, and easy to misapply, but it’s generally agreed that it applies where the demand for a resource is “elastic”, and not where the demand is “inelastic”. The demand for putting cat videos on YouTube is elastic (there’s a potentially infinite number of them) but the demand for data transactions in a government welfare service is inelastic (it just depends on how many benefit payments there are to process).
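
To see how elasticity drives the paradox, consider a toy model (a sketch with made-up numbers, not a forecast): a constant-elasticity demand curve, where an efficiency gain lowers the effective price of the work being done and demand responds accordingly.

    def energy_after_efficiency_gain(baseline_energy: float,
                                     efficiency_gain: float,
                                     elasticity: float) -> float:
        """Toy Jevons model with a constant-elasticity demand curve.

        efficiency_gain: fraction of energy saved per unit of work (0.2 = 20%).
        elasticity: price elasticity of demand (magnitude); above 1 is 'elastic'.
        """
        price_factor = 1 - efficiency_gain           # effective price per unit of work falls
        demand_factor = price_factor ** -elasticity  # demand rises as the price falls
        return baseline_energy * (1 - efficiency_gain) * demand_factor

    print(energy_after_efficiency_gain(100, 0.2, 0.5))  # inelastic demand: ~89, energy falls
    print(energy_after_efficiency_gain(100, 0.2, 1.5))  # elastic demand:  ~112, energy rises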

Whatever the details, and whether a Victorian economist’s ideas apply to the cloud or not, it seems clear that cutting the energy used by a given data center or a given server isn’t going to reduce the overall energy used by data centers while the number of data centers and servers is growing so rapidly.

In fact, if economics is applied to the cloud, it can lead to miserable conclusions.

If demand can only be cut by increasing the price, then that might mean we need artificial price increases to slow the growth of data centers.

This would then lead to green taxes, and the inevitable question of how they could be applied in a fair and equitable global way.

No simple answer
There’s no obvious conclusion to this. Myths may not be “true” but they can be useful stories that help guide the industry.

Someone needs to keep an eye on the big picture of total data center energy use, but in the end, all you can do in the field is concentrate on improving the efficiency of your own individual sites.