The next industrial revolution, we’re told, is the Internet of Things (IoT). Smartfridges will order smartfood, while your smartwatch will tell you when the next smartbus is due.
The idea is that we add automation and control to the way our machines work together. As this cuts across the normal compartmental barriers, new insight, efficiencies and opportunities will drive new business models and create the magical more-from-less that’s the perennial promise of IT to its masters.
New data flows
Data centers will be at the heart of all this, marshalling and analyzing the huge new data flows that power the IoT – and to do this, they will need a lot of new engineering to balance their own efficiencies, costs and capabilities.
A lot of basic engineering awaits, and a lot of people are keen to sell data center companies new hardware and services to help support the transition.
The data center itself is a candidate for IoT techniques. Quite aside from the normal considerations of monitoring and controlling client data flow, storage requirements and access capabilities, the machinery of the data center can be intelligently linked to provide efficiencies that reduce operational expenditure and create new client services.
Data center infrastructure management (DCIM) is a well-known product sector that has been providing an embryonic IoT within the facility for a while, as it learns to mesh information from many different sources within the data center. So far, DCIM has tended to focus on automation and alerts rather than data analysis. This is where work in IoT thinking can feed back into DCIM.
Normalization is very important. One of the challenges for IoT is that while many machines and existing intelligent controllers generate the right operational data, it’s often in a proprietary or uncommon format. Normalization means translating all of it into a common format before storing it, and that’s essential for any long-term analysis. A plethora of industrial protocols – Modbus, BACnet, SNMP and WMI being just some of the more common – need a common transport, translation to a common nomenclature and, finally, conversion of the actual data into common units.
For example, one power management unit might report data as OUTPUT_CUR, another as DCRAIL1, with the first being an integer representing amps and the second a floating-point number with tenths of an amp resolution. Within a particular DCIM suite or the manufacturer’s own software, this doesn’t matter, but to compare the two, either in real time or over an extended operational period, they must be stored under the same name in the same units.
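The translation step can be as simple as a lookup table. Here is a minimal sketch of that normalization: the vendor field names come from the example above, while the common schema name, the scale factors and the assumption that DCRAIL1 reports in tenths of an amp are illustrative, not taken from any real product.

```python
# Minimal normalization sketch: map each vendor's field name and units
# onto a common name in common units (amps). Field names follow the
# article's example; scale factors are illustrative assumptions.

NORMALIZATION_MAP = {
    # vendor field -> (common name, scale factor to amps)
    "OUTPUT_CUR": ("output_current_amps", 1.0),  # integer, already in amps
    "DCRAIL1":    ("output_current_amps", 0.1),  # assumed tenths of an amp
}

def normalize(field: str, raw_value: float) -> tuple[str, float]:
    """Translate a vendor-specific reading into the common schema."""
    common_name, scale = NORMALIZATION_MAP[field]
    return common_name, raw_value * scale

# Two units reporting roughly the same load in their native formats:
print(normalize("OUTPUT_CUR", 4))
print(normalize("DCRAIL1", 42))
```

Once every reading lands in the store under `output_current_amps`, comparisons across vendors become a straightforward query rather than a translation exercise.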
Why bother? Well, there are at least two reasons. Taking an IoT approach will allow you to understand and control your operations through a common set of tools, which should be able to cope with future changes. It will also be useful to your customers, who have an interest in knowing the efficiency of their operations.
With a common, deep data set, you can easily provide much more information as a service to them, or offer tariffs that reward efficiency while preserving margins. There are already signs of this in the data center market – American colo company RagingWire gives its customers access to a wide range of internal data through its N-Matrix service, with visualization tools to help map their usage of its resources and plan ahead – but IoT will create more opportunities for everyone to develop these ideas.
New models are coming
Once a lot of normalized data is available, new models suggest themselves. You can relate power usage to changes in your workload by combining server reports with electricity meter values or thermal changes, and then use this to drive automated load amalgamation within your data center, or even transfer loads to other sites.
At one extreme, this lets entire sections of your equipment be idled; at the other, it can provide early warning of the need for more provisioning.
Long-term trend and correlation knowledge is the only way to do this efficiently, and that requires IoT techniques for large data set compilation and analysis.
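As a sketch of that correlation step, the snippet below estimates how strongly metered facility power tracks server-reported workload over a period. The series, field meanings and the Pearson-coefficient approach are illustrative assumptions; a strong correlation is what would justify using power data to drive automated load amalgamation.

```python
# Illustrative correlation of normalized readings: server-reported CPU
# load against facility meter readings over the same hours. A coefficient
# near 1.0 suggests power is a reliable proxy for workload placement
# decisions. Data values here are invented for the example.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

cpu_load_pct = [20, 35, 50, 65, 80, 70, 40]         # server-reported
meter_kw     = [110, 130, 155, 170, 190, 180, 140]  # facility meter
print(f"load/power correlation: {pearson(cpu_load_pct, meter_kw):.2f}")
```

Over months of normalized data, the same calculation run per rack or per tenant is what turns raw telemetry into the trend knowledge the text describes.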
By bringing more of the machinery of the data center into the overall management system, other savings are possible. Ambient cooling can improve power usage effectiveness (PUE), but it needs filters on the fans that pull in the outside air. And these clog: as regular inspection is expensive, they tend to be replaced on a fixed cycle, whether they need it or not. If the fan units report speed or current draw, dirty filters are easily detected as RPM or power requirements start to rise – no inspection needed, and much better resource management.
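That filter check reduces to comparing each fan's reported draw against its clean-filter baseline. The sketch below assumes the field names, the baseline figures and the 15 percent threshold; all are illustrative, not vendor values.

```python
# Sketch of the dirty-filter check: flag any fan unit whose current
# draw has risen past a threshold above its clean-filter baseline.
# Unit names, readings and the 15% threshold are assumptions.

CLOG_THRESHOLD = 1.15  # flag when draw exceeds baseline by 15%

def dirty_filters(baseline_amps: dict, latest_amps: dict) -> list:
    """Return fan units whose current draw suggests a clogged filter."""
    return [unit for unit, amps in sorted(latest_amps.items())
            if amps > baseline_amps[unit] * CLOG_THRESHOLD]

baseline = {"fan-01": 2.0, "fan-02": 2.1, "fan-03": 1.9}
latest   = {"fan-01": 2.1, "fan-02": 2.6, "fan-03": 1.9}
print(dirty_filters(baseline, latest))  # fan-02 has risen roughly 24%
```

Run against a continuous feed of normalized fan telemetry, this replaces the fixed replacement cycle with replacement on demonstrated need.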
Lifetime management is another example: it becomes much simpler with an IoT approach, which not only reduces costs but also has significant safety and uptime ramifications. In the same way that hard disk error-monitoring catches not just instances but patterns of failure, more in-depth knowledge of UPS battery conditions and thermal cycling within cabinets can constrain risk and reduce costs.
One of the reasons why IoT is being promoted so heavily is that vendors expect it to provide a single software and hardware integration platform for vertical industries – healthcare, transportation, energy, retail and, yes, IT itself – which have traditionally developed and guarded their own infrastructure fiefdoms. This provides the same economy of scale for industry as the internet has done for information.
Thus, in all areas of industry infrastructure the next generation of smart devices, including those common in the data center, will start to talk IoT protocols, in much the same way that all pre-internet networking standards were subsumed or rendered extinct as IP-based systems took over.
Standards are coming
These standards are already being introduced. Last month, GE and Pivotal announced the development of open-source support for IoT protocols such as MQTT, CoAP and DDS in the Cloud Foundry consortium. Cisco is actively developing MQTT, CoAP and XMPP; IBM, well, it invented MQTT, while Intel has it at the core of its IoT thrust. On one level, these standards simply do the jobs that older, proprietary protocols did, but this generation of protocols is explicitly designed to work over the internet, to interoperate, to be supported by entire industries and to be applicable to the widest range of uses.
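To make the contrast with older industrial protocols concrete, here is a sketch of what a normalized reading might look like on the wire with a protocol like MQTT: a hierarchical topic plus a small self-describing payload. The topic scheme and field names are invented for illustration, not any standard; actual publishing would go through an MQTT client library such as Eclipse Paho.

```python
# Sketch of an MQTT-style message for one normalized reading.
# Topic layout (site/rack/device/metric) and JSON fields are
# illustrative assumptions, not a defined standard.
import json
import time

def make_message(site, rack, device, metric, value, units):
    """Build a topic string and JSON payload for one reading."""
    topic = f"dc/{site}/{rack}/{device}/{metric}"
    payload = json.dumps({
        "value": value,
        "units": units,
        "ts": int(time.time()),  # Unix timestamp of the reading
    })
    return topic, payload

topic, payload = make_message("lon1", "r12", "pdu-03",
                              "output_current", 4.2, "A")
print(topic)  # dc/lon1/r12/pdu-03/output_current
```

Because the topic is hierarchical, a consumer can subscribe to one device, one rack or a whole site with a single wildcard pattern – the kind of cross-vendor interoperability the proprietary protocols never offered.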
The Internet of Things certainly isn’t about toothbrushes ordering toothpaste or monitoring your central heating from a beach in Rio de Janeiro. Its first and most abiding impact will be in bringing industry into the internet age.
In many ways, data center management got there first – but there is still a risk that it might miss the next big wave.