The heads of both companies' DCIM businesses ask each other questions about the new market.
10 April 2012 by DatacenterDynamics
The Data Center Infrastructure Management (DCIM) market is moving full steam ahead, with Emerson Network Power and Schneider Electric placing themselves firmly at the front of the pack with two solutions that promise to envelop the entire data center.
Schneider Electric’s StruxureWare operations suite was announced last year. The product runs on server clusters with load balancing, unlimited scaling and optional server choice. Using StruxureWare, data center operators can carry out physical location planning, airflow analysis, and communication with VMware vSphere and Microsoft System Center VMM for virtualized server management. It also covers the standard space, power, cooling and network functions, among other areas.
Emerson Network Power’s Trellis offering builds on its acquisitions of Aperture and Avocent in 2009. With the release of Trellis, Emerson extends its standard monitoring toolset to include server, storage and network infrastructure as well as power consumption.
Both companies claim to be advancing the area of DCIM with all-inclusive product offerings that come with the promise of control over the entire data center. Here we put both companies side by side to see how their views on the relatively new DCIM industry have been shaping up. (For background, see our past coverage of both products in our special DCIM edition of FOCUS, Issue 17.)
Steve Hassell, president of Emerson’s DCIM division, Avocent, and Kevin Brown, VP of Data Center Global Offer at Schneider Electric, put their top DCIM questions to each other in this discussion. Here is what they said:
Q. KEVIN BROWN (Schneider Electric): What is DCIM and, more importantly, what is not DCIM?
A. STEVE HASSELL (Emerson Network Power): DCIM is getting the IT infrastructure layer and the physical infrastructure layer to operate as one. It’s essentially virtualizing the physical infrastructure, just as has happened to the IT infrastructure layer. The gap this covers is very broad, so for that to happen you still need to draw boundaries.
DCIM does not extend past the hypervisor on the IT side. It’s in the hypervisor because you need to have an idea of what’s happening inside IT – this is the demand side of the equation. You need to understand what’s happening, so you can take the correct actions.
I don’t think a DCIM tool physically moves virtual loads. I also don’t think it’s an IT service management tool that is involved in SLAs. It really starts with where the demand is, extends into the physical infrastructure, and then overlaps a bit but stops short of where the building management system (BMS) sits. You end up with DCIM spanning from the BMS at one end to the hypervisor at the other, where it interfaces with the virtual management system and the configuration management database and can interact with the IT essentials.
Q. STEVE HASSELL (Emerson Network Power): What are you hearing from CIOs as to the critical needs driving them to better manage their physical infrastructure and integrate that with the logical systems infrastructure?
A. KEVIN BROWN (Schneider Electric): Most CIOs are being driven by the trend towards ‘do more with less’. In the context of a data center physical infrastructure, this creates a tension between getting more energy efficient and getting more aligned with the business while receiving no relief from availability requirements. To be more energy efficient, operationally efficient and efficient with use of capital, a data center must reduce its safety margins.
This safety margin reduction results in the need for three things: better planning and communication between facilities and IT; more flexible data center architectures; and better tools to manage the infrastructure. It’s myopic to think this is only a conversation about software tools – it’s about planning, communication, business processes and the software tools.
Q. KB: How does a customer get started with a DCIM implementation?
A. SH: You need to take a maturity model approach. Step one: understand what you have. If you don’t know what you are starting with, it’s useless discussing how to control it. At that point, you can move to step two, which is applying real-time monitoring across the data center. This means getting status information on what’s happening. From that you can start flowing information across the data center.
Then you are ready for the next stage, which I call analyze and diagnose. You can begin asking analytical questions of the information flowing through, such as: ‘Do I have enough capacity? Where should I put the next device? How long until it runs out?’ Once you get those answers you can start to add in rules. Then you can automate the entire process and allow the human operators to step back and let the system operate in a more dynamic fashion.
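The ‘analyze and diagnose’ questions above can be sketched as a simple calculation over monitored rack data. This is a hypothetical illustration only; the rack names, figures and growth rate are assumptions, not any vendor’s actual data model or API:

```python
# Hypothetical sketch of the "analyze and diagnose" stage:
# answering capacity questions from monitored rack power data.
# All rack names and figures below are illustrative assumptions.

racks = [
    {"name": "rack-01", "power_capacity_kw": 10.0, "power_used_kw": 7.5},
    {"name": "rack-02", "power_capacity_kw": 10.0, "power_used_kw": 4.0},
]

growth_kw_per_month = 0.5  # assumed average load growth per rack

def headroom(rack):
    """Remaining power capacity in a rack, in kW."""
    return rack["power_capacity_kw"] - rack["power_used_kw"]

# "Do I have enough capacity? Where should I put the next device?"
best = max(racks, key=headroom)
print(f"Place next device in {best['name']} ({headroom(best):.1f} kW free)")

# "How long until it runs out?"
for rack in racks:
    months = headroom(rack) / growth_kw_per_month
    print(f"{rack['name']}: ~{months:.0f} months of headroom left")
```

A real DCIM tool would feed the same logic from live sensor data rather than static figures, but the questions it answers are the ones listed above.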
The good news is that every step of this maturity model pays for itself from an ROI standpoint.
Q. SH: Traditional data center design practices have involved several levels of power margins resulting in stranded capacity and lower efficiency. How can DCIM help recapture some of that capacity without compromising availability?
A. KB: The real action in the future is going to be simulation of potential changes. We can model a data center today, and when properly implemented a DCIM solution should be able to fully simulate changes before they happen. This is the Holy Grail, and it will only come through a coordinated and well-planned physical infrastructure architecture, as well as a good understanding of IT requirements.
This simulation will become more critical as customers begin implementing ‘zones of availability’ instead of a Tier III data center. We believe customers will begin to think of certain zones in the data center that only require Tier I and others that require Tier II.
Finally, virtualization and machine migration require better integration between the DCIM tools and the virtual machine manager. The software will need to know that a machine requiring Tier III is migrating to a Tier III zone and not a Tier I zone. That only comes through integration between the DCIM tool and the virtual machine tools.
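The tier check described here can be sketched as a simple guard in the migration workflow. This is a hypothetical example; the zone names, tier numbers and function are assumptions, and real DCIM and VM-manager integrations expose very different interfaces:

```python
# Hypothetical guard: before migrating a VM, confirm that the
# destination zone's availability tier meets the VM's requirement.
# Zone names and tiers below are illustrative assumptions.

ZONE_TIERS = {"zone-a": 3, "zone-b": 1}  # DCIM's view of each zone's tier

def can_migrate(vm_required_tier: int, destination_zone: str) -> bool:
    """Allow migration only if the destination meets the VM's tier.

    Unknown zones default to tier 0, i.e. the migration is refused.
    """
    return ZONE_TIERS.get(destination_zone, 0) >= vm_required_tier

# A Tier III workload may land in the Tier III zone...
print(can_migrate(3, "zone-a"))  # True
# ...but not in the Tier I zone.
print(can_migrate(3, "zone-b"))  # False
```

The point of the sketch is the direction of the data flow: the VM manager asks, and the DCIM tool, which knows the physical zones, answers.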
Q. KB: Why do you think there has been such a slow adoption rate of DCIM tools by customers?
A. SH: The term itself has only been around for a couple of years. People are still trying to sort DCIM out. There are also a lot of vendors claiming to have DCIM tools, and the DCIM toolset has been quite immature.
There hasn’t been that overarching platform that allows you to start off small and expand out. It’s been a series of point products you have to integrate yourself, which is a lot of work. This isn’t easy – you need a lot of skillsets. It requires a background in thermodynamics, electrical engineering and a broad knowledge of the IT side of data centers and many other areas in order to put together a piecemeal solution.
Businesses will be happy to use tools once they are broad enough and easy enough to use, and can get ROI every step of the way.
Q. SH: There are a variety of new entrants with unique approaches to the market. How do you see these different approaches evolving?
A. KB: We’ve seen, in the past, companies proposing frameworks that were proprietary by nature and meant to replace existing systems. CIOs should be wary of a ‘forklift’ replacement or upgrade.
The additional requirement that CIOs should be looking for is the DCIM vendor’s position on openness. Every tool deployed in a data center should be using open protocols that provide every piece of data that tool contains. In this way, CIOs are protected for the future.
It’s not just limited to the DCIM tools but the monitoring systems in the physical environment as well. CIOs should be looking for monitoring and control systems of facility-level power and cooling plants that have modern open interfaces that are robustly implemented. This isn’t just a question of what’s available today.
Collaboration will only come when vendors are willing to be completely open.