Data center efficiency: modeling v. metering

 

LBNL puts Romonet’s modeling software to the test

24 January 2013 by Yevgeniy Sverdlik - DatacenterDynamics

Target chamber of a massive laser at Lawrence Livermore National Laboratory. Credit: LLNL

In 2011, UK data center infrastructure management (DCIM) outfit Romonet brought its software suite to the US market. Instead of installing a series of sensors and monitoring tools to track the performance of the infrastructure, the company’s product uses information about the equipment inside the data center and how it is configured to create a model of the facility’s environment.

Recently, building energy efficiency researchers at the US Department of Energy’s Lawrence Berkeley National Laboratory (LBNL) in Berkeley, California, conducted a study to test Romonet’s novel approach to data center management and planning. They ran the test at a data center at sister lab Lawrence Livermore National Laboratory (LLNL) that was due for a facelift.

LBNL’s goal was to evaluate the effectiveness of the software suite. For LLNL, this was not just a study but also a way to generate action items for increasing its data center’s energy efficiency.

Bill Tschudi, a member of the LBNL team charged with transferring new energy-efficient-building technologies to the real world, says the tool was interesting because of the simplicity of its use, compared to other tools on the market. “There’s all kinds of complicated models you can build with all kinds of spreadsheets and all kinds of other analysis tools,” Tschudi says.

Using Romonet’s software was simple and straightforward. It took only a few days to build the model and another few days to verify it. The software has a library of devices and their characteristics, so to build a model, the data center operator simply selects the devices they have in their facility. “To me, that’s a huge advantage over other people,” Tschudi says.

The test was successful. Aside from a few “minor discrepancies”, the model’s results matched the metered data, according to Tschudi. The data center’s operators walked away from the study with a to-do list and data they could use to justify funding for those improvements. As an example, one thing LLNL learned about its data center was that the power readings taken at the uninterruptible power supply (UPS) output were off: the meters showed more power coming out of the UPS than going in. At the next maintenance cycle, the operators plan to address the issue with their UPS vendor.
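
A modeling-versus-metering comparison surfaces exactly this kind of inconsistency: a UPS cannot deliver more power than it draws, so output readings above input readings point to a metering fault rather than a real condition. The short Python sketch below is hypothetical (it is not part of Romonet’s product) and the sample readings are illustrative, not LLNL’s; it simply shows the sort of sanity check involved.

```python
# Hypothetical sanity check on UPS meter readings (not Romonet's code).
# A double-conversion UPS always loses some power, so metered output
# should never exceed metered input.

def check_ups_meters(input_kw: float, output_kw: float) -> str:
    """Flag physically impossible or implausible UPS readings."""
    if output_kw > input_kw:
        return "FAULT: output exceeds input - at least one meter is miscalibrated"
    efficiency = output_kw / input_kw
    if efficiency > 0.99:
        return f"SUSPECT: {efficiency:.1%} efficiency is implausibly high"
    return f"OK: {efficiency:.1%} efficiency"

# Illustrative readings only (the article does not give the actual values):
print(check_ups_meters(input_kw=180.0, output_kw=186.0))  # FAULT: output exceeds input
print(check_ups_meters(input_kw=180.0, output_kw=167.0))  # OK: 92.8% efficiency
```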

The building
For the test to work, the researchers needed a data center that was already well instrumented to compare results from the Romonet model with meter readings, and LLNL’s facility in Livermore fit the profile. Although it did not have monitoring for all areas of the building, the set-up was good enough, Tschudi says.

The building housing the 15,500 sq ft data center is nearly 50 years old. The data center’s UPS systems are set up in 2N configuration, using 1,000kVA modules.

There are 20 power distribution units (PDUs) on the raised floor and 2N+N critical power distribution. Total IT capacity is 630kW (2N) and the load at the time of the study was 325kW (2N). The facility is cooled by an N+1 chilled-water system, with computer room air handlers (CRAHs) and air handling units (AHUs) feeding the underfloor plenum with constant-speed fans.

It took a total of 16 man-days to carry out the modeling project, including the time needed to build and verify the model before the team could start using it to predict the effects of changes to the infrastructure.

What if?
“Once we knew that the model was doing a pretty good job of matching the actual conditions, then we started playing some ‘what if?’ scenarios,” Tschudi says.

Researchers used Romonet’s software to investigate three such scenarios: increasing IT load from 332kW to 1,110kW, adding variable-speed fan controls to air handlers and adding a waterside economizer. “In a few minutes of time you can change the model and then recalculate to see what the energy savings might be for those scenarios,” Tschudi says.

In all three scenarios, the facility’s average annual power usage effectiveness (PUE) would go down dramatically, according to results returned by the program.

The increase in IT load would bring PUE from the baseline of 2.16 down to 1.62. Adding variable-speed capabilities to the air handlers would bring it down to 1.56, and installing a waterside economizer would deliver the biggest improvement: down to 1.39.
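
The improvement from simply adding IT load follows directly from the definition of PUE (total facility power divided by IT power): infrastructure overhead grows far more slowly than the IT load it supports. The back-of-envelope check below, written in Python with the figures reported above, makes the point; the implied overhead numbers are our own arithmetic, not output from the model.

```python
# Back-of-envelope check of the reported PUE figures (not Romonet's model).
# PUE = total facility power / IT power, so overhead = (PUE - 1) * IT load.

def overhead_kw(pue: float, it_kw: float) -> float:
    """Infrastructure overhead (cooling, distribution losses, etc.) implied by a PUE."""
    return (pue - 1.0) * it_kw

baseline = overhead_kw(pue=2.16, it_kw=332)    # ~385 kW of overhead today
expanded = overhead_kw(pue=1.62, it_kw=1110)   # ~688 kW at the higher load

print(f"baseline overhead: {baseline:.0f} kW")
print(f"expanded overhead: {expanded:.0f} kW")
# Overhead roughly doubles while IT load more than triples,
# which is why the modeled PUE improves so sharply.
```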

The software also calculated the annual energy cost savings each improvement would deliver and how long each would take to pay for itself. Payback time for the waterside economizer (by far the largest investment) would be the longest: eight years to recoup the US$1.7m the project would cost. The best return on investment (ROI) would come from simply adding IT load, the study concluded.
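
The payback figures follow the standard simple-payback calculation: capital cost divided by annual energy-cost savings. A minimal sketch appears below; the annual-savings figure is inferred from the cost and payback period the study reported, and simple payback ignores discounting, maintenance and energy-price changes.

```python
# Simple-payback sketch using the reported waterside-economizer figures
# (US$1.7m capital cost, roughly eight-year payback).

def simple_payback_years(capital_cost: float, annual_savings: float) -> float:
    """Years to recoup an investment, ignoring discounting and cost escalation."""
    return capital_cost / annual_savings

# The article gives cost and payback; the annual savings is our inference from them.
implied_annual_savings = 1_700_000 / 8          # roughly US$212,500 per year
print(simple_payback_years(1_700_000, implied_annual_savings))  # -> 8.0
```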

Public funding
Which products on the market LBNL tests depends on a variety of factors, not least timing.

The lab’s scientists do not study the market to determine the most important technology to test and demonstrate, Tschudi says. These studies are funded in part by the California Energy Commission, while the end-user facilities contribute time and resources.

This article first appeared in the 27th issue of DatacenterDynamics FOCUS magazine. Get a free subscription on our website.
