Sometime around 2011, a new phrase entered the IT lexicon: 'software-defined.' The term is sometimes credited to an engineer at VMware, and usually appears in the context of the 'software-defined data center' or 'software-defined network'. The strange phrasing describes how, using a distributed management architecture, it is possible to introduce a new level of 'plasticity' to a network or a pool of compute resources, re-making and un-making connections, or re-allocating compute resources, on the fly.
Wikipedia describes SDDC as a 'marketing term,' which perhaps reflects the reality that companies and products in the software-defined category trade on being seen as innovative, cloud-oriented and disruptive, and are therefore of great interest. Many of the early 'software-defined' innovators were acquired at sky-high valuations.
The value and importance put on 'software-defined' in this context was not misplaced. The architecture allows for much of the hard-wired complexity at the component level in IT to be abstracted out and moved to a management 'plane,' thereby allowing for simpler, homogeneous devices to be routinely deployed and managed. This use of aggregated resources is critical to the economics of cloud computing.
But equally important, software-defined architectures make it possible to much more easily and cheaply manage the highly complex and dynamic 'east-west' traffic flows in large data centers and beyond. 5G networks and IoT edge networks would likely be near impossible to build, and uneconomic, without software-defined networking technologies.
For some time, some innovators in the data center have been working on how the model of “software-defined” can be applied to power. Does this disruptive and revolutionary change in IT have an equivalent in the way that power is distributed and managed? Power is, after all, not so dissimilar to the flow of bits: it is a flow of electrons, which can be stored, generated and consumed, is distributed over a network, and is switchable and routable, and therefore manageable, from a distance. A follow-on question to this is: Even if power can be managed in this way, when is it safe, economic and useful to do so, given the many extra complexities and expenses involved in managing power?
The answer to both questions is not quite as binary or as definitive as at the IT level, where the impact of software-defined has been immediate, significant and clearly disruptive. While the application of more intelligence and automation is clearly useful in almost all markets, the opportunities for innovation in power distribution are much less clear. Data centers are a stand-out example: Large facilities present a great opportunity for suppliers, because of the size and growth of the market, vast over-provisioning, high costs, and inflexibility in power distribution and use. But at the same time, the operator’s aspirations for efficiency and agility may be strongly constrained by customer inertia, existing investments, rigid designs and business models, and the need for high-availability solutions that have been proven over time (low risk).
In the wider energy market, however, a number of parallel and successive waves of innovation have been sweeping through the industry, as the sector moves to more dynamic and distributed generation and storage, greater use of intelligence and automation, and flatter, transactive models. Suppliers working in the field of power management and storage - ABB, GE, Eaton, Siemens, Schneider Electric and Vertiv, to name a few - have been developing “smart energy” technologies, in some cases, for decades. New entrants – most notably electric car and battery innovator Tesla – have also introduced radical new storage and distribution technologies.
The impact is being seen at the utility grid level, while microgrids, now a mature technology but a young market, provide a model for dynamic and intelligent management of power sources, storage and consumption.
In the data center, similar innovations are underway, although adoption is patchy. These range from the introduction of advanced energy management systems, which may be used to monitor energy use and inform decisions about purchasing, storage and consumption, to microgrid technologies, where power sources and IT consumption are monitored and managed under central control (at present, this technology in a data center context is rare and is most likely to be seen on a campus with a high performance computing center).
But the most obvious connection/comparison with the software-defined data center is the software-defined power (SDP) architecture, as promoted by the small Silicon Valley company Virtual Power Systems (VPS); and, less obviously, the use of advanced, shared reserve architectures, as promoted by some UPS suppliers, most notably Vertiv. These architectures are very different, but in both cases, one of the key goals is to reduce the need for spare capacity, and to divert or reserve power for applications, racks or rows that most need it, and away from those that need it least.
VPS’ architecture is quite specific: it makes use of distributed lithium-ion batteries in the rack, to provide a “virtual” distributed pool of power that can be used and managed when needed, or to provide backup. In this sense, it is analogous to the homogeneous pool of compute resources in a software-defined data center. VPS deploys a number of small, centrally managed, rack-mounted control devices, known as ICE switches, which can be used to turn off the main UPS power at the rack and thereby draw on the local Li-ion battery.
The management software plays a key role. Effectively, it can divert power from one rack to another – not by using the Li-ion batteries from one rack to power another rack elsewhere (although this is possible, it is more complex because of the power conversion and harmonization requirements), but by switching from the central UPS to battery in certain racks, making more power available elsewhere.
In order to make such a decision, the management software uses ever-changing data about the nature of the loads on the servers and the levels of protection required. Ultimately, although the technology is in its early days, loads may be moved around to match the power availability, or moved in order to release power for use elsewhere. As in a software-defined network, the central software is using data and policies to intelligently control relatively simple equipment.
In an SDP environment, the software might be considered a step on the road to “dynamically resourced” data centers, with capacity reserved for the most critical applications, while other, less-important applications may have less capacity allocated (“dynamic tiering” is not the appropriate term, as this is about power availability, not fault-tolerance or maintainability).
SDP software can also use the batteries to offer an extra, supplemental power source during times of peak demand, effectively enabling a data hall to use more power than has been provisioned by the UPS; or it could be used to enable operators to use local power at times when the grid power is most expensive, or when colocation customers wish to avoid going beyond agreed power use limits, triggering extra charges.
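The peak-demand use case above is, at bottom, simple arithmetic: the battery supplies whatever demand exceeds the UPS-provisioned limit, for as long as its stored energy lasts. The sketch below is an invented illustration of that arithmetic; the interval length, capacities and demand profile are all assumptions.

```python
# Illustrative peak-shaving arithmetic: a local battery covers any
# demand above the provisioned limit, letting a hall briefly draw
# more power than the UPS provides. All figures are invented.

def shave_peaks(demand_kw, provisioned_kw, battery_kwh, interval_h=0.25):
    """Return a list of (grid_draw_kw, battery_kwh_remaining)
    for each demand interval (default: 15-minute intervals)."""
    out = []
    for d in demand_kw:
        excess = max(0.0, d - provisioned_kw)
        energy_needed = excess * interval_h          # kWh to cover the peak
        discharge = min(energy_needed, battery_kwh)  # limited by stored energy
        battery_kwh -= discharge
        out.append((d - discharge / interval_h, battery_kwh))
    return out

profile = [90, 110, 130, 120, 95]  # kW, per 15-minute interval
for grid, soc in shave_peaks(profile, provisioned_kw=100, battery_kwh=10):
    print(f"grid draw {grid:.1f} kW, battery {soc:.1f} kWh left")
```

Note that in this example the battery is exhausted by the third interval, so the fourth peak passes through unshaved; sizing the battery against the expected duration of peaks is the critical design decision.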
Uptime Institute sees the VPS architecture as only one of a number of approaches in the data center that might be classed as “Smart Power” or “Smart Energy,” although not all the use cases are the same. For example, centralized UPS systems that pool capacity in a “three makes two” configuration, or even up to an “eight makes seven” configuration, can use intelligent, policy-driven software switching to maintain capacity, spread loads and reduce risk.
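The economics of these “x makes y” configurations can be made concrete with some simple arithmetic: in an N+1 pool of identical modules, the load must be sized so that the loss of any single module still leaves enough capacity. The module sizes below are hypothetical, chosen only to illustrate the calculation.

```python
# Sketch of the capacity arithmetic behind "x makes y" pooled UPS
# configurations: x identical modules carry a load that any (x - 1)
# of them could support alone (N+1). Module ratings are illustrative.

def usable_capacity(module_kw, modules):
    """Max supportable load if any single module may fail."""
    return module_kw * (modules - 1)

def provisioning_overhead(modules):
    """Spare capacity as a fraction of usable capacity."""
    return 1 / (modules - 1)

# "Three makes two": 3 x 500 kW modules support 1000 kW,
# with spare capacity equal to 50% of the usable load.
print(usable_capacity(500, 3), provisioning_overhead(3))
# "Eight makes seven": the same module size supports 3500 kW,
# and the spare-capacity overhead falls to about 14%.
print(usable_capacity(500, 8), provisioning_overhead(8))
```

This is why more granular configurations are attractive: the overhead of the redundant module shrinks as the pool grows, but, as discussed below, the switching and management burden grows with it.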
The industry trend is towards reducing provisioning costs by using N+1 or N+x configurations (about 40 percent of all data centers have an N+1 solution), rather than full 2N configurations. But this carries a risk: Uptime Institute Research’s 2018 industry survey found that data centers with 2N architectures experience fewer outages than those with N+1 architectures. Among the operators with a 2N architecture, 35 percent experienced an outage in the past three years, while 51 percent of those with an N+1 architecture had an outage in the same period.
The likelihood is that this is not entirely due to the investment in the extra capacity. It may be argued that as extra equipment is added to make up more granular solutions, and extra connections and switches are used to link up the extra UPS systems, good management, capacity planning and maintenance become more important.
This is where hardware/software combinations such as shared reserve systems or software-defined systems may come into their own. Shared reserve systems can be used to pool the capacity of multiple smaller UPS systems, and then dynamically allocate power from this pool to PDUs using software-managed transfer switches. This involves some complex switching, and naturally worries managers who want to ensure ease of operation and management. But the key is the software - if it is policy-driven, easily operated, and designed with resilience, reliability should be very high, and should rise further over time as operational expertise and AI techniques are inevitably embedded.
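The allocation step in such a shared reserve system can be sketched as a simple policy: hand out the pooled reserve to PDUs in priority order until the pool is exhausted. The names and figures below are hypothetical, and the physical routing (the software-managed transfer switches) is deliberately not modeled.

```python
# Minimal sketch of policy-driven reserve allocation: a shared pool
# of UPS reserve capacity is granted to PDUs in priority order.
# PDU names, capacities and priorities are invented for illustration.

def allocate_reserve(pool_kw, requests):
    """requests: list of (pdu_name, needed_kw, priority),
    higher priority served first.
    Returns a dict mapping pdu_name -> granted_kw."""
    grants = {}
    for name, need, _prio in sorted(requests, key=lambda r: -r[2]):
        grant = min(need, pool_kw)  # partial grants once the pool runs low
        grants[name] = grant
        pool_kw -= grant
    return grants

reqs = [("pdu-a", 40, 3), ("pdu-b", 30, 1), ("pdu-c", 50, 2)]
print(allocate_reserve(100, reqs))
# {'pdu-a': 40, 'pdu-c': 50, 'pdu-b': 10}
```

A real system would add the complications the paragraph above alludes to: transfer-switch state, maintenance windows, and fail-safe behavior when the management software itself is unavailable.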
If power management and control software (and associated equipment) can be deployed without adding significantly to either risks or costs, and can largely match the proven levels of availability that can be achieved by directly using hard-wired physical equipment, then the argument for software-defined power approaches is clear: greater flexibility in how power is directed, managed, and deployed. In turn, power capacity, relative to the workload, may be significantly increased.
Ultimately, this could mean that the data center managers, using a suite of tools (applications), have some centralized control over how power is used, distributed, capped, stored, or even sold to meet changing demands, service levels, or policies. It should enable the building of leaner data centers of all sizes. Suppliers such as Schneider and Vertiv are among those involved in the development effort, working with smaller vendors such as VPS; meanwhile major operators, including Equinix, are investigating the value of the technology. At the same time, smarter, centrally managed constellations of UPS systems and other power equipment are being used to create more granular and manageable reserves of power. Over time, more widespread adoption of these technologies, whoever the suppliers are and whatever the technology is called, seems very likely.
But there are barriers to adoption. In fact, in the “Disruptive Data Center” research project by Uptime Institute and 451 Research, a panel of experts gave SDP one of the lowest rankings (3.4 out of 5) for the likely rate and extent of uptake, and the likely impact on the sector. A larger pool of data center operators that we polled was even less enthusiastic.
This skepticism most likely reflects a combination of unfamiliarity and the possible difficulty of justifying such investments, especially for existing environments that are already stable and where many of the costs have been depreciated. But this early-stage skepticism does not rule out later, widespread adoption: direct liquid cooling and microgrids both scored even lower than SDP, yet there is a case for arguing that both have a strong, long-term future. For evangelists of these technologies, part of the challenge is convincing operators to invest in a business environment so heavily grounded in existing architectures.
SDP is probably best viewed as part of a package of separate but interlinked technologies that each need to make progress before paving the way for others. A key one of these is Li-ion (or similar) battery chemistry adoption, both centrally and in the racks; these batteries can be discharged and recharged hundreds to thousands of times without much effect on battery capacity or life compared to lead-acid batteries, opening the way for much more dynamic, smarter use of energy.
Equally, technologies such as DCIM, software automation and AI are regarded cautiously by many operators. As smarter, software-driven tools and systems become more mature, more intelligent, and more integrated, and the use of remote control and automation increases, the adoption of more agile, software-driven systems will increase. Such systems promise greater efficiency and better use of capacity, reduced maintenance, and increased fault tolerance.
Software-defined power will be discussed at the upcoming DCD Energy Smart conference and exhibition in Stockholm, Sweden, in April. To register your attendance, visit the event website - data center end-users and operators qualify for a free pass.