While every facet of data center management is changing at a rapid pace, operating budgets rarely keep up. Data volume doubles every 18 months and applications every two years; in contrast, operating budgets take eight years to double (IDC Directions, 2014). 

IT has always been asked to do more with less, but the dynamic nature of the data center has been accelerating in recent years. Smart devices, big data, virtualization, and the cloud continue to change service delivery models and elevate the importance of flexibility, elasticity, and scalability.

Every facet of data center management, as a result, has been complicated by an incredibly rapid rate of change. Thousands of devices move on and off intranets. Fluid pools of compute resources are automatically allocated. Does this ultra-dynamic environment make it impossible for IT and facilities management teams to identify under-utilized and over-stressed resources?

If so, energy consumption in the data center will continue to skyrocket. And data centers already consume 10 percent of all energy produced around the globe, according to recent Natural Resources Defense Council reports.

Fortunately, IT is far from powerless even within these challenging data center conditions.

Discovering some secret weapons

Ironically, in today’s software-defined data centers, the secret weapon for curbing energy costs lies in the hardware. Rack and blade servers, switches, power distribution units, and many other data center devices expose a wealth of power and temperature information during operation. Data center scale and the diversity of that hardware make the information too cumbersome to collect and apply manually, which has led to a growing ecosystem of energy management solution providers.
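As a rough illustration, most modern servers expose these readings through their baseboard management controller over the DMTF Redfish REST API. The sketch below polls one chassis for its power draw and a temperature reading; the BMC address, credentials, and chassis path are placeholders, and real deployments vary by vendor and firmware.

    import requests

    BMC = "https://10.0.0.42"            # hypothetical BMC address
    AUTH = ("admin", "password")         # placeholder credentials
    CHASSIS = BMC + "/redfish/v1/Chassis/1"

    def power_watts(session):
        # PowerControl[0].PowerConsumedWatts is the standard Redfish property
        # for instantaneous chassis power draw.
        data = session.get(CHASSIS + "/Power", verify=False).json()
        return data["PowerControl"][0]["PowerConsumedWatts"]

    def inlet_temp_c(session):
        # Take the first temperature sensor; production code would filter
        # by PhysicalContext == "Intake" for the inlet reading.
        data = session.get(CHASSIS + "/Thermal", verify=False).json()
        return data["Temperatures"][0]["ReadingCelsius"]

    with requests.Session() as s:
        s.auth = AUTH
        print(power_watts(s), "W,", inlet_temp_c(s), "C")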

Data center managers, as a result, have many choices today. They can take advantage of a management console that integrates energy management, have an integrator add energy management middleware to an existing management console, or independently deploy an energy management middleware solution to gain the necessary capabilities.

Regardless of the deployment option, a holistic energy management solution allows IT and facilities teams to view, log, and analyze energy and temperature behaviors throughout the data center. Automatically collected and aggregated power and thermal data can drive graphical maps of each room in a data center, and data can be analyzed to identify trends and understand workloads and other variables.
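To make that concrete, the fragment below sketches one way such a solution might roll per-server samples up into rack-level figures that a dashboard could render as a heat map; the rack names and readings are illustrative, not measurements.

    from collections import defaultdict
    from statistics import mean

    # (rack, server, watts, inlet C) samples, e.g. gathered on a fixed
    # interval by the polling sketch above
    samples = [
        ("rack-01", "srv-01", 180, 22.5),
        ("rack-01", "srv-02", 240, 24.0),
        ("rack-02", "srv-03", 310, 27.5),
    ]

    by_rack = defaultdict(list)
    for rack, _server, watts, temp in samples:
        by_rack[rack].append((watts, temp))

    for rack, readings in sorted(by_rack.items()):
        total_w = sum(w for w, _ in readings)
        avg_t = mean(t for _, t in readings)
        print(rack, total_w, "W total,", round(avg_t, 1), "C average inlet")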

Visibility and the ability to log energy information equip data center managers to answer basic questions about consumption and to make better decisions about data center planning and optimization.

Best-in-class energy management solutions take optimization to a higher level by combining automated monitoring and logging with real-time control capabilities. For example, thresholds can be set to cap power for certain servers or racks at appropriate times or when conditions warrant. Servers that are idle for longer than a specified time can be put into power-conserving sleep modes. Power can be allocated based on business priorities, or to extend the life of back-up power during an outage. Server clock rates can even be adjusted dynamically to lower power consumption without negatively impacting service levels or application performance.
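The control loop behind those capabilities does not have to be elaborate. The sketch below illustrates the general idea under assumed budgets and timeouts; set_power_cap() and suspend() are placeholders for vendor- or DCIM-specific control calls, not a real API.

    import time
    from dataclasses import dataclass

    RACK_BUDGET_W = 8000        # assumed rack power budget
    PER_SERVER_CAP_W = 300      # assumed cap applied when over budget
    IDLE_LIMIT_S = 30 * 60      # suspend after 30 idle minutes

    @dataclass
    class Server:
        name: str
        watts: int          # current draw
        last_active: float  # timestamp of last useful work

    def set_power_cap(server, cap_w):       # placeholder control call
        print("capping", server.name, "at", cap_w, "W")

    def suspend(server):                    # placeholder control call
        print("suspending idle server", server.name)

    def enforce_policies(servers):
        rack_power = sum(s.watts for s in servers)
        if rack_power > RACK_BUDGET_W:
            # Cap the heaviest consumers first until the rack is back in budget.
            for srv in sorted(servers, key=lambda s: s.watts, reverse=True):
                set_power_cap(srv, PER_SERVER_CAP_W)
                rack_power -= max(0, srv.watts - PER_SERVER_CAP_W)
                if rack_power <= RACK_BUDGET_W:
                    break
        for srv in servers:
            if time.time() - srv.last_active > IDLE_LIMIT_S:
                suspend(srv)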

Energy-conscious data centers take advantage of these capabilities to meet a broad range of operating objectives including accurate capacity planning, operating cost reduction, extending the life of data center equipment, and compliance with “green” initiatives.

Common uses and proven results

Customer deployments highlight several common motivations and provide insight into the types and scale of results that can be achieved with a holistic energy management solution and associated best practices.

  • Power monitoring. Identifying and understanding peak periods of power use motivate many companies to introduce an energy management solution. The insights gained have allowed customers to reduce usage by more than 15 percent during peak hours, and to reduce monthly data center utility bills even as demand for power during peak periods goes up. Power monitoring is also being applied to accurately charge co-location and other service users.
  • Increasing rack densities. Floor space is another limiting factor for scaling up many data centers. Without real-time information, static provisioning has traditionally relied on power supply ratings or derated levels based on lab measurements. Real-time power monitoring typically shows that actual power draw comes in much lower. By combining monitoring with power capping, data centers can provision racks more aggressively and drive up densities by 60 to more than 80 percent within the same power envelope (see the sketch after this list).
  • Identifying idle or under-used servers. Idle “ghost” servers can draw as much as half of the power they consume under peak workloads. Energy management solutions have shown that 10 to 15 percent of servers fall into this category at any point in time, and they help data center managers consolidate and virtualize to reclaim this wasted energy and space.
  • Early identification of potential failures. Besides monitoring and automatically generating alerts for dangerous thermal hot spots, power monitoring and controls can extend UPS uptime by up to 15 percent and prolong business continuity by up to 25 percent during power outages.
  • Advanced thermal control. Real-time thermal data collection can drive intuitive heat maps of the data center without adding expensive thermal sensors. Thermal maps can be used to dramatically improve oversight and fine-grained monitoring (from floor level to device level). The maps also improve capacity planning, and help avoid under- and over-cooling. With the improved visibility and threshold setting, data center managers can also confidently increase ambient operating temperatures. Every one-degree increase translates to 5 to 10 percent savings in cooling costs.
  • Balancing power and performance. Trading off raw processor speed for smarter processor design has allowed data centers to decrease power by 15 to 25 percent with little or no impact on performance.
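As a rough, illustrative calculation of the rack-density point above (the wattages below are assumptions, not measurements), provisioning against measured peak draw plus a capping safety margin fits far more servers into the same envelope than provisioning against nameplate ratings:

    RACK_ENVELOPE_W = 10_000
    NAMEPLATE_W = 750         # PSU rating used for static provisioning
    MEASURED_PEAK_W = 420     # observed peak draw under real workloads
    CAP_MARGIN = 1.05         # headroom enforced by a power cap

    static_servers = RACK_ENVELOPE_W // NAMEPLATE_W                          # 13
    capped_servers = int(RACK_ENVELOPE_W // (MEASURED_PEAK_W * CAP_MARGIN))  # 22

    gain = 100 * (capped_servers - static_servers) / static_servers
    print(static_servers, "servers/rack (nameplate) vs", capped_servers,
          "servers/rack (measured + cap):", round(gain), "% denser")

The 69 percent gain in this toy example lands in the 60 to more than 80 percent range reported above.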

Time to get serious about power

Bottom line, data center hardware still matters. The constantly evolving software approaches for mapping resources to applications and services call for real-time, fine-grained monitoring of the hardware. Energy management solutions make it possible to introduce this monitoring, along with power and thermal knobs that put IT and facilities in control of energy resources that already account for the largest line item on the operating budget.

Software and middleware solutions that allow data center managers to keep their eyes on the hardware and the environmental conditions let automation move ahead full speed, safely, and affordably – without skyrocketing utility bills. Power-aware VM migration and job scheduling should be the standard practice in today’s power-hungry data centers.
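As a sketch of what power-aware placement can mean in practice (host names and figures are illustrative, and the heuristic is only one of many possibilities): among hosts with enough spare CPU and memory, prefer the one with the most headroom under its power cap.

    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        free_cpus: int
        free_mem_gb: int
        power_w: int      # current draw
        power_cap_w: int  # enforced power cap

    def place_vm(hosts, cpus, mem_gb):
        fits = [h for h in hosts if h.free_cpus >= cpus and h.free_mem_gb >= mem_gb]
        # Prefer the host with the most watts of headroom below its cap.
        return max(fits, key=lambda h: h.power_cap_w - h.power_w, default=None)

    hosts = [Host("host-a", 8, 64, 310, 350), Host("host-b", 16, 128, 220, 350)]
    print(place_vm(hosts, cpus=4, mem_gb=16).name)   # host-b has more headroom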

Jeff Klaus is the general manager of Data Center Manager (DCM) Solutions at Intel Corporation.


