Long before there was an Internet of Things, when computers were less than personal, and neither phones nor refrigerators nor cars were expected to be very smart, there was a television ad campaign that hawked a certain oil filter brand.
At four dollars, the premium oil filter cost twice as much as its competitors and was perceived as expensive, meeting considerable buyer resistance when first introduced. There was no question, however, that the filter’s superior technology improved performance, prevented breakdowns, and protected your automobile from expensive servicing down the road. Replacing an oil filter was preferable to replacing an engine.
Pay me now, or pay me later
The commercial’s tagline, delivered by an avuncular mechanic as he slid out on a dolly and poked his head from beneath a full-sized sedan, famously drove home a basic truth: an initial investment can lead to significant long-term savings. “You can pay me now, or pay me later.”
The product gradually gained traction and ultimately dominated market share.
The lessons of this pre-digital-age marketing story run parallel to the false economies and flawed logic surrounding Data Center Infrastructure Management (DCIM): the major myths of misperception holding back wider, if not complete, adoption, and the countervailing truths in favor of its deployment.
Myth 1: DCIM is too expensive
Intel commissioned Redshift Research to survey 200 data center managers at facilities across the U.S. and the UK. Among other findings, the survey determined that 43 percent of data center managers still rely on manual alternatives to DCIM tools for capacity planning and forecasting.
As we begin the year of the zettabyte, it is counter-intuitive that MS Excel and a tape measure occupy such a prominent place in many facility managers’ tool kits, with one in 10 pulling out their Stanley to initiate expansion or layout changes. Just over half (55 percent) now use DCIM platforms. Chief among the reasons cited against deploying DCIM was perceived cost, with 46 percent of those surveyed saying they considered it too expensive.
At first blush, concerns regarding cost might appear reasonable. But when you consider that DCIM tools provide data center managers with access to information that can identify problems and help determine the true expense, implications, and causes of outages, the logic of any cost-based objection is exposed as misguided. Across the 118 data centers that were able to quantify it, the average cost per outage was calculated at $28,900. Cue that oil filter tagline.
Make no mistake: large data centers inevitably face outages and downtime at some stage in their life cycle, whether caused by hardware failure, power supply issues, or thermal problems. But 72 percent of data center managers who deploy DCIM analytics for capacity planning and cooling-efficiency monitoring are able to quantify the cost of outages to their business, compared to only 14 percent of those who do not use DCIM at all.
Moreover, as time is money, and the average time to recover from an outage was nearly eight hours, it’s significant to note that 21 percent of the data centers using DCIM for capacity planning and forecasting report that they could recover within two hours, compared to only 11 percent of those without the tool.
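To make the time-is-money point concrete, here is a back-of-the-envelope sketch using the survey’s figures. The linear cost-per-hour assumption is purely illustrative; real outage costs depend on workload, SLAs, and timing.

```python
# Toy outage-cost model. The $28,900 average cost and ~8-hour average
# recovery time come from the survey cited above; assuming cost scales
# linearly with downtime is a simplification for illustration only.
AVG_OUTAGE_COST = 28_900   # dollars, survey average per outage
AVG_RECOVERY_HOURS = 8     # approximate survey average

def outage_cost(recovery_hours: float) -> float:
    """Estimate outage cost assuming cost scales with downtime hours."""
    cost_per_hour = AVG_OUTAGE_COST / AVG_RECOVERY_HOURS
    return cost_per_hour * recovery_hours

# Compare a typical 8-hour recovery with the 2-hour recovery that
# DCIM-equipped sites more often report:
print(f"8-hour outage: ${outage_cost(8):,.0f}")
print(f"2-hour outage: ${outage_cost(2):,.0f}")
```

Under this toy model, cutting recovery from eight hours to two trims a single outage from $28,900 to roughly $7,200.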
Myth 2: DCIM requires too much time and resources to implement
At 35 percent, the second most cited reason data center managers gave for sticking with manual methods of capacity planning and forecasting was the worry that they lacked the time and resources to implement a more automated approach. Again, while this belief may seem perfectly cogent on the surface, the reality is that 56 percent of data center managers who employ manual methods must devote more than 40 percent of their time, every month, to capacity planning and forecasting.
This paradoxical scenario is a doom loop of the first order: data center managers who employ manual methods for capacity planning and forecasting lack the time and resources to deploy a DCIM tool precisely because so much of their time is consumed by tasks that DCIM is designed to accomplish automatically.
Myth 3: rack-level thermal sensors and spreadsheets are sufficient to maximize cooling
Data centers consume a staggering amount of electricity. According to the Natural Resources Defense Council, data center electricity consumption is projected to increase to roughly 140 billion kilowatt-hours annually by 2020, the equivalent annual output of 50 power plants, costing American businesses $13 billion annually in electricity. In addition to the expense of powering servers, a significant portion of energy goes to cooling, making cooling efficiency a prime means of reducing overall costs.
Here, again, paradoxes abound. While 57 percent of data centers surveyed report thermal-related challenges during the past year that affected operational efficiency, and 63 percent are using DCIM analytics to help optimize cooling, as many as 20 percent rely exclusively on rack-level thermal sensors and spreadsheets. Those who aren’t using DCIM analytics are less likely to conduct hotspot audits and unlikely to be able to perform Computational Fluid Dynamics (CFD) simulations.
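To illustrate the kind of automated hotspot audit a DCIM analytics layer performs continuously, and a spreadsheet does not, here is a minimal sketch. The rack IDs, readings, and the 27 °C inlet threshold are hypothetical placeholders, not from any particular product or standard.

```python
# Hypothetical rack-inlet temperatures in degrees Celsius, keyed by rack ID.
# In a real DCIM deployment these would stream in from rack-level sensors.
readings = {
    "rack-a01": 22.5,
    "rack-a02": 29.1,
    "rack-b01": 24.0,
    "rack-b02": 31.4,
}

HOTSPOT_THRESHOLD_C = 27.0  # illustrative alarm threshold, not a spec value

def find_hotspots(temps: dict[str, float], limit: float) -> list[str]:
    """Return the sorted rack IDs whose inlet temperature exceeds the limit."""
    return sorted(rack for rack, temp in temps.items() if temp > limit)

hot = find_hotspots(readings, HOTSPOT_THRESHOLD_C)
print("Hotspot racks:", hot)  # → ['rack-a02', 'rack-b02']
```

The point is not the few lines of logic but that a DCIM platform runs this check against live sensor feeds around the clock, where a spreadsheet captures only whatever snapshot someone last typed in.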
A DCIM system that offers CFD capability as part of its core solution — fed by real-time monitoring information to allow for continuous improvements and validation of your cooling strategy and air-handling choices — can have a direct, positive impact on your bottom line.
Data center managers need accurate data, including information on power consumption, thermals, airflow, and utilization, in order to take appropriate action. With DCIM and increased levels of automated control, data center managers regain lost time and receive timely information for the most common challenges of capacity planning, allocation, and cooling efficiency. Any cost-benefit analysis of an investment in DCIM must take into account the savings on resources, reduced downtime, and cooling efficiencies, among other factors, all of which positively affect ROI.
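A cost-benefit comparison of that kind can be sketched in a few lines. Every dollar figure below is a made-up placeholder, not vendor pricing or survey data, except the per-outage figure, which echoes the $28,900 survey average cited earlier; substitute your own licensing, downtime, staffing, and energy numbers.

```python
# Toy DCIM cost-benefit model with hypothetical placeholder figures.
def annual_net_benefit(dcim_cost: float,
                       avoided_downtime: float,
                       staff_time_saved: float,
                       cooling_savings: float) -> float:
    """Net annual benefit: savings attributable to DCIM minus its cost."""
    return avoided_downtime + staff_time_saved + cooling_savings - dcim_cost

net = annual_net_benefit(
    dcim_cost=50_000,         # hypothetical annual license + upkeep
    avoided_downtime=57_800,  # e.g. two fewer outages at the $28,900 average
    staff_time_saved=30_000,  # manual capacity-planning hours reclaimed
    cooling_savings=20_000,   # hypothetical gains from thermal analytics
)
print(f"Net annual benefit: ${net:,.0f}")
```

Even with conservative inputs, the exercise forces the cost objection to be weighed against the outage, labor, and cooling figures it usually ignores.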
In other words, the lessons of a premium oil filter still apply.
If you are interested in learning more, register now for our free webinar with OSIsoft, featuring a contribution from Hewlett Packard.