When data center infrastructure management (DCIM) first appeared, it attracted great interest from analysts and end-users, many of whom expected it to become a prevalent software tool, indispensable to a growing data center industry. As such, the hype cycle accelerated, with analysts such as Gartner touting it as the next significant technology and vendor business opportunity to disrupt our industry.
But as is often the case with any emerging technology, the reality of how it impacts the market can differ from the predictions made during its “hype stage.”
Factors expected to drive DCIM adoption included the ability to streamline operational efficiency, to help end-users monitor and reduce energy consumption, and to maximize reliability – all while providing a tangible return on investment (ROI) and the ability to manage large, disaggregated IT portfolios with ease. Fast forward, and the reality of the business opportunity was, well, a little different. While DCIM proved a raging success for some data center managers, it failed to meet the expectations of others; where some found major benefits, others felt it was a wasted investment.
More recently, the market has experienced some retrenchment and consolidation. Last autumn, for example, Vertiv, whose Aperture asset management software was once among the most widely adopted DCIM products, announced it was discontinuing its flagship Trellis platform. Rival DCIM vendor Nlyte was acquired by Carrier, a specialist in cooling equipment.
This did little to build confidence in the capabilities of DCIM. The perception that leading platforms were disappearing from the market, with support for existing Trellis contracts ending in 2023, has left many data center stakeholders bewildered.
Could it be, in fact, that DCIM has become an overblown luxury that most organizations can’t afford or don’t need? As is always the case, the truth may be a little different…
Luckily for the end-users realizing the benefits of DCIM, investments being made in user experience, data science, machine learning and remote monitoring have begun to change how it is perceived. And while it is undoubtedly true that many data center operators see its key strengths primarily in monitoring and management, the reality is that it became invaluable during the pandemic, especially where accessibility, visibility and continuity were fundamental challenges for our industry’s key workers.
Reliability was, and is, the name of the game, and DCIM platforms offering simple installation, intuitive ease of use and real-time, data-driven insights certainly saw increased adoption among end-users. Indeed, the 2021 Uptime Institute annual data center survey revealed that 76 percent of operators felt their most recent downtime incident could have been prevented with better management, processes or configuration.
Concerns surrounding environmental impact, sustainability and energy efficiency have also grown in line with changing end-user demands, especially within the colocation space. A Schneider Electric report with 451 Research revealed 97 percent of customers globally were demanding contractual commitments to sustainability. Monitoring, measurement and management are of course critical to an organization’s sustainability efforts, positioning software, again, as an invaluable tool. However, the grand expectations that DCIM alone would spearhead major efforts throughout the industry to improve energy efficiency and sustainability have yet to be realized.
Implementation, you see, remains key – understanding the business case, helping the customer deploy the software, and ensuring that all assets are monitored correctly while benchmarking their performance is essential work. Important as it may be, for many legacy operators it is no small feat, and with a burgeoning skills gap, finding the right team for the job can leave organizations struggling.
There’s also the procurement cycle to address, which involves multiple stakeholders. Responsibility for managing data center infrastructure, even the elements typically addressed via DCIM tools, sits between IT, facilities and M&E departments, often with different objectives and chains of command. Finding the right person to sign off on a new DCIM project, or even identifying the right group of people to agree to its use in the first place, was once a real challenge. Luckily the business case is changing: while the first versions of DCIM required considerable time and effort to customize, the newer, next-generation versions can simplify the process significantly, bringing siloed teams together.
A new outlook
Nowadays there remains a pressing need for tools to manage the various functions of a data center efficiently. The real capabilities of DCIM, especially recent versions deployed via the cloud, allow businesses of all sizes to identify what their assets are, where they’re located, and how well they’re performing. Further, they can proactively identify any status or security issues that need to be addressed, or any gaping holes in the infrastructure.
Any company that subscribes to ISO 27001, the global standard for IT security, must be able to track its assets and the people who have access to and control of those assets. As such, cloud-based DCIM deployments can offer major benefits and allow distributed assets to be monitored and managed at relatively low cost.
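To make the asset-tracking requirement concrete, here is a minimal sketch of the kind of record a DCIM tool might keep, pairing each asset with its location and the people authorized to access it. All names and fields here are illustrative assumptions, not any particular product’s data model.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """Hypothetical asset record of the kind a DCIM tool might track."""
    asset_id: str
    location: str                      # e.g. site / room / rack / U-position
    authorized_users: set = field(default_factory=set)

    def grant_access(self, user: str) -> None:
        self.authorized_users.add(user)

    def can_access(self, user: str) -> bool:
        return user in self.authorized_users

# Example: record who may touch a given rack-mounted UPS
ups = Asset("UPS-042", "LON1/RoomA/Rack12/U30")
ups.grant_access("j.smith")
print(ups.can_access("j.smith"))   # True
print(ups.can_access("intruder"))  # False
```

Even this trivial structure captures the two things an ISO 27001 audit asks about an asset: where it is, and who controls it.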
Another critical concern is minimizing downtime. Here, a reliable, vendor-agnostic DCIM platform can provide insights into all key power paths, especially those comprising equipment from multiple manufacturers. By tracking dependencies, potential risks to a mission-critical environment from a single piece of equipment, such as a power distribution unit (PDU), uninterruptible power supply (UPS) or cooling system, can be identified and potential outages mitigated.
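The dependency-tracking idea can be sketched in a few lines: model which loads each upstream device feeds, then flag any device whose failure would take down multiple loads. The device names, topology and threshold below are all assumed for illustration.

```python
# Hypothetical power-path map: which loads depend on which upstream device.
# A device feeding several critical loads is a single point of failure
# worth flagging for redundancy review.
feeds = {
    "UPS-A":  ["Rack-1", "Rack-2"],
    "PDU-7":  ["Rack-3"],
    "CRAC-2": ["Rack-1", "Rack-2", "Rack-3"],
}

def single_points_of_failure(feeds, threshold=2):
    """Return devices whose failure would affect `threshold` or more loads."""
    return [dev for dev, loads in feeds.items() if len(loads) >= threshold]

print(single_points_of_failure(feeds))  # ['UPS-A', 'CRAC-2']
```

A real DCIM platform builds this map automatically from discovered equipment; the point is simply that once dependencies are data, risk queries become trivial.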
It also remains essential for DCIM software to interact with legacy systems, facilities management suites, IT and network management software. This is best achieved through the use of application programming interfaces (APIs) that allow high-level information exchanges between disparate tools. Some analysts have opined that a particular weakness of Vertiv’s soon-to-be-discontinued Trellis platform was its dependence on Oracle Fusion application development tools, which tended to limit its attractiveness to customers outside of Oracle’s environment. The fact remains, however, that in a world full of distributed data centers, interoperability is essential for all management tools.
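In practice, API-level interoperability often comes down to mapping each vendor’s payload onto one common schema. The sketch below shows the idea with two entirely hypothetical payload shapes; no real vendor API is being described.

```python
import json

# Hypothetical JSON payloads from two different vendors' monitoring APIs.
vendor_a = json.loads('{"device": "ups-1", "load_pct": 41.5}')
vendor_b = json.loads('{"id": "pdu-3", "metrics": {"loadPercent": 62.0}}')

def normalize(payload):
    """Map vendor-specific fields onto one common schema (field names assumed)."""
    if "load_pct" in payload:                        # vendor A's shape
        return {"device": payload["device"], "load": payload["load_pct"]}
    return {"device": payload["id"],                 # vendor B's shape
            "load": payload["metrics"]["loadPercent"]}

for p in (vendor_a, vendor_b):
    print(normalize(p))
```

Once everything speaks one schema, downstream dashboards and alerting rules no longer care which manufacturer’s equipment produced the reading – which is the whole case for vendor-agnostic tooling.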
Building a business case
So the big question is: how can you accurately measure the return on investment (ROI) from DCIM? Some may say it’s still an expensive overhead, difficult to quantify when you utilize hardware assets from multiple vendors, but vendor-agnostic monitoring capabilities quickly address that barrier.
You might calculate your ROI by tracking the downtime you’ve avoided, and the reputational damage or costs that would otherwise have impacted your business. It could be via reduced power consumption, improved cooling efficiency and, thereby, a lower PUE.
Maybe it’s the cost of your energy bills reducing, or the ability to measure and lower the carbon impact of your IT estate? Or maybe, just maybe, it’s about mission-critical reliability, and how you use the insights gleaned to work with service partners to balance the cost of managing distributed sites.
I’ve championed DCIM from the start, consulting with customers and helping them implement the software and get the best from its capabilities. With new investments being made all the time in data science and machine learning capabilities, I’m confident that finding an ROI is far simpler than many end-users realize.
The most immediate and obvious benefit is DCIM’s ability to provide real-time visibility, which is pivotal as we transition towards a greener, more sustainable and more digitally dependent future.