

Data center standards need an update

Data centers are notorious power hogs, responsible for roughly two percent of the greenhouse gas emissions produced today. The latest figures suggest that their power use is coming under control, but there is more to be done. Using alternative energy sources could help data center operators run emissions-free facilities, creating a more sustainable industry overall.

Such energy alternatives are already available today. Apple is running its own solar farms, while Google and Microsoft have both proposed using a rechargeable battery in each server as a distributed uninterruptible power supply (UPS). So why aren't data center operators using them, choosing instead to rely on decades-old, inefficient diesel generators to back up their facilities?

Are diesels outdated? A Cummins C2750D5B diesel genset (Source: Cummins)

Solving old problems 

The problem is that the current data center classifications that most of these facilities have to abide by are very outdated. The current standards don’t take into account the past two decades of energy innovation, let alone reward facilities for sustainability advantages. And, as such, data center operators have to abandon otherwise cost- and energy-saving power overhauls to avoid paying penalties for noncompliance.

The entire data center industry is long overdue for a discussion on how existing classification standards can support out-of-the-box designs that would help it evolve. This is especially necessary as data centers continue to power an increasingly digital world and global economy. Guidelines need to be set for new classifications that can be applied alongside current standards to recognize more energy efficient power types, without penalties.

Existing standards only speak to legacy functionality

The current data center standards, such as the Uptime Tier classifications, and the EN50600 scheme, promote fixed-availability classes and prescribed redundancy measures. These may have made sense a couple of decades ago when there weren’t any reasonable energy alternatives. But, that’s no longer the case, and, as such, the existing standards don’t even begin to recognize the demands on today’s modern data center.

Each of the standards is built on four progressive classes for redundant diesel generators and UPSs. Ranked for performance and uptime, each class incorporates the requirements of the previous one:

  • Basic non-redundant: capacity requirements for a dedicated data center site
  • Basic redundant: capacity components that increase data center availability
  • Concurrently maintainable: an increased level of redundancy, which enables the data center subsystems to continue operating while parts of the power and cooling equipment are being replaced or maintained
  • Fault-tolerant: a data center with fully redundant subsystems
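The cumulative nature of these classes can be sketched as a toy model. This is purely illustrative: the class names and requirement strings below are paraphrased from the list above, not taken from any standard's official text.

```python
from enum import IntEnum


class AvailabilityClass(IntEnum):
    """Four progressive classes; a higher class includes all lower requirements."""
    BASIC_NON_REDUNDANT = 1
    BASIC_REDUNDANT = 2
    CONCURRENTLY_MAINTAINABLE = 3
    FAULT_TOLERANT = 4


# Requirement each class adds on top of the class below it (illustrative labels).
REQUIREMENTS = {
    AvailabilityClass.BASIC_NON_REDUNDANT: "dedicated site capacity",
    AvailabilityClass.BASIC_REDUNDANT: "redundant capacity components",
    AvailabilityClass.CONCURRENTLY_MAINTAINABLE: "maintainable power and cooling paths",
    AvailabilityClass.FAULT_TOLERANT: "fully redundant subsystems",
}


def cumulative_requirements(cls: AvailabilityClass) -> list[str]:
    """Return every requirement a facility must meet for the given class."""
    return [REQUIREMENTS[lower] for lower in AvailabilityClass if lower <= cls]


print(cumulative_requirements(AvailabilityClass.CONCURRENTLY_MAINTAINABLE))
```

The point of the model is the inheritance: certifying at fault-tolerant level means satisfying all four requirement sets, which is exactly where a novel power design (say, distributed per-server batteries instead of a central UPS) fails to map onto the prescribed components even when it delivers equivalent uptime.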

These four criteria only really speak to legacy functionality, though. And when you consider that the average U.S. data center is more than 20 years old, that is a span of time that has seen energy innovation explode across industries. In fact, many organizations are planning to close their private facilities in favor of more advanced colocation centers. However, fitting these new facilities into the existing classifications is like forcing a round peg into a square hole; they simply don't fit.

Old standards won’t support a new, data-driven economy

We now live in a highly digital world. Every day, 2.5 quintillion bytes of data are created; indeed, the influx has been so rapid that 90 percent of the data in the world today was created in the last two years alone. With so much data being exchanged, the need for storage has exploded, prompting data centers to expand at a breakneck pace.

Another shift within the industry is the fact that a growing proportion of compute and storage capacity is now located in commercial data centers rather than private or corporate facilities. This is a result of the growth of hybrid and public cloud architectures, which puts a lot of pressure on commercial data center operators to focus on interconnectivity and to balance availability against sustainability.

Both the growing volume of data and the increased compute and storage capacity for modern data centers require sustainable energy sources to handle larger-than-ever loads. Yet, despite research that shows how many of the new, innovative designs can deliver the same uptime as legacy architectures, they still simply can’t be classified, limiting the capabilities of many colocation facilities.

The conversation has thankfully begun

Luckily, the conversation around innovating data center technology seems to be heating up. Recently, Google announced that it would join Facebook's Open Compute Project to share data center designs with tech companies across the globe. Despite the goodwill and support behind this innovation sharing, though, the fact remains that current classifications don't do enough to recognize innovative data center designs.


The Green Grid, a non-profit industry consortium that works to improve IT and data center resource efficiency globally, has also been a vocal advocate for more inclusive industry standards. The group recently hosted the 2016 Green Grid Forum, bringing leaders from the information and communications technology industries together to discuss improvements that can be implemented over the short and long term to achieve efficiency within the data center.

Beyond Google's and the Green Grid's efforts, a new, complementary set of guidelines that takes innovation into account could do far more than improve energy efficiency: it could also help data centers perform better and deliver an improved user experience to their clients. Once everyone across the industry is on the same page, it's only a matter of time before new classification standards are developed to help the industry keep pace with evolving energy needs.


Lex Coors is chief data center technology & engineering officer at Interxion


Readers' comments (1)

  • Wise words, but I've lost count of the number of times I have seen them written. Every year, a number of bloggers state that we need new standards and the industry to recognise innovation, but the truth is, we have yet to get our house in order against existing designs, let alone any future ones. Uptime Institute Tiering was superseded many years ago as businesses began to realise that 'uptime' is not defined by the Data Centre alone; software also has a large part to play. EN50600 is a valiant attempt to bring international agreement into this space and should be welcomed and built upon.

    If what you are really calling for, is an acceptable standard for colocation facilities that recognises and therefore encourages innovation, then that is purely down to the power of your Sales and Marketing departments to sell a different risk profile to your customers and does not need to involve enterprise data centres. EDCs are and always have been different beasts altogether and need to be based on a solid IT Strategy and interlinking Data Centre Strategy. This will define the level of business risk, and consequently, the level of innovation organisations will be prepared to accept within the Data Centre space.





More link