The software-defined data center (SDDC) is a trend that has been growing in importance for several years, driven in part by factors such as virtualization and cloud computing. Infrastructure has been evolving to be more dynamic and adaptable in order to serve a business requirement for greater agility. But what is an SDDC, and how do you get there from the systems in place now?

A simple definition of an SDDC is a data center where the entire infrastructure is configurable under software control. That means not just the servers, but also the networking that interconnects them and the storage resources that serve the needs of applications.

As an example, look no further than the data centers operated by the large Internet companies such as Google, AWS and Facebook, which need to be able to respond to constantly shifting demands on resources from customers. These have set the pattern for other service providers and enterprises to follow.

Follow the leader

These hyperscale companies built their data centers to be this way from the outset, but only need to support a limited range of services. Other organizations are likely to have to deal with a wider range of uses, and legacy infrastructure that they cannot replace all in one go. The hyperscale operators also have the resources to develop their own automation and orchestration tools, while others will have to make do with off-the-shelf commercial or open source tools.

According to Ovum principal analyst Roy Illsley, software-defined networking and software-defined storage are set to be the fastest growing areas of IT spend over the next few years, followed by cloud orchestration and management.

“This tells you that the market is heading towards the fact that everything is going to be software-defined, everything is going to be software controlled, and the internal spending of the IT department is increasingly shifting to software rather than hardware now,” he said.

But there is no single definition of what software-defined means. While it is fairly straightforward for servers, in that it usually means dividing up hardware resources among multiple workloads using virtual machines or containers, things are not so simple for storage or networking.
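
To make that concrete, here is a minimal sketch using the Docker SDK for Python; it assumes a local Docker daemon is running, and the image, container name and resource limits are just examples rather than any recommended configuration.

```python
# Minimal sketch using the Docker SDK for Python (docker-py); assumes a
# local Docker daemon is running. Image and container names are examples.
import docker

client = docker.from_env()

# Carve a slice of the server's resources into an isolated workload:
# here, an nginx web server limited to half a CPU and 256 MB of RAM.
container = client.containers.run(
    "nginx:latest",
    name="web-frontend",
    detach=True,
    mem_limit="256m",
    nano_cpus=500_000_000,   # 0.5 CPU
)
print(container.short_id, container.status)
```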

Software-defined networking (SDN), for example, separates the network’s data plane, which forwards data packets, from the control plane, which manages overall traffic flow. To support SDN, switch vendors such as Cisco have to make their hardware configurable using protocols like OpenFlow, so the way they route traffic can be dynamically managed by a central controller.
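
To illustrate the principle only, the sketch below models a central controller installing OpenFlow-style match-and-action rules on switches. The class and method names are invented for the example and do not correspond to any vendor's actual API.

```python
# Conceptual sketch only: a hypothetical central controller pushing
# OpenFlow-style match/action rules down to switches. Class and method
# names are illustrative, not a real controller API.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict                  # e.g. {"in_port": 1, "eth_type": 0x0800}
    actions: list                # e.g. ["output:2"]
    priority: int = 100

@dataclass
class SdnController:
    switches: dict = field(default_factory=dict)   # switch id -> rules

    def register_switch(self, switch_id: str) -> None:
        self.switches[switch_id] = []

    def install_rule(self, switch_id: str, rule: FlowRule) -> None:
        # A real controller would send this over the OpenFlow channel;
        # here we just record the intended switch state centrally.
        self.switches[switch_id].append(rule)

controller = SdnController()
controller.register_switch("leaf-1")
controller.install_rule(
    "leaf-1",
    FlowRule(match={"in_port": 1, "eth_type": 0x0800}, actions=["output:2"]),
)
print(controller.switches["leaf-1"])
```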

However, SDN platforms such as the OpenStack Neutron service and VMware’s NSX run on the servers and manage traffic between virtual machines, using software-based switching. They also support the creation of virtual networks that overlay the physical LAN, each with its own range of IP addresses and security policies.
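
As a rough indication of what driving Neutron programmatically looks like, the snippet below uses the openstacksdk Python library to create an overlay network with its own address range; the cloud name and the CIDR are placeholders.

```python
# Sketch using the openstacksdk Python client; assumes a cloud named
# "mycloud" is defined in clouds.yaml and that Neutron is available.
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud name

# Create an overlay (tenant) network and give it its own address range,
# independent of the physical LAN underneath.
net = conn.network.create_network(name="app-overlay")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="app-overlay-subnet",
    ip_version=4,
    cidr="192.168.50.0/24",   # example range, isolated from the underlay
)
print(net.id, subnet.cidr)
```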

Software-defined storage (SDS) is also tricky to pin down. Perhaps the most common definition is a distributed storage service, such as Red Hat’s Ceph or Gluster products. These are used to create a scalable pool of storage from the combined resources of a cluster of server nodes, and present it as block storage, object storage, a file system, or combinations of these.
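
For a flavor of how such a pool is consumed, the sketch below writes and reads back an object through Ceph's python-rados bindings; it assumes a reachable cluster, a local ceph.conf and an existing pool, whose name here is just an example.

```python
# Sketch using Ceph's python-rados bindings; assumes a running cluster,
# a readable /etc/ceph/ceph.conf and an existing pool named "demo-pool".
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("demo-pool")   # pool name is an assumption
    try:
        # Objects written here are distributed and replicated across the
        # cluster's server nodes rather than living on a single array.
        ioctx.write_full("hello-object", b"stored across the cluster")
        print(ioctx.read("hello-object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```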

Meanwhile, SDS can also refer to storage virtualization such as DataCore’s SANsymphony. This abstracts and pulls together existing storage infrastructure, including arrays from third-party vendors, into a virtual SAN. It then provides a unified set of storage services for this pool of storage, including quality of service, thin provisioning and auto-tiering.

However you characterize it, the purpose of software-defined infrastructure is to be flexible: configured, controlled and monitored by higher-level management tools. This could be via configuration management software such as Puppet, or orchestration platforms such as OpenStack or Mesosphere’s DC/OS.
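
The common pattern underneath these tools is declarative: describe the desired state and let the software converge the infrastructure towards it. The toy reconcile loop below illustrates that idea only; it is not how Puppet or any orchestration platform is actually implemented, and all names in it are hypothetical.

```python
# Purely illustrative reconcile loop in the spirit of declarative tools:
# compare the declared desired state of each resource with what is
# actually deployed, and apply only the differences.
desired = {"web-01": {"nginx": "1.24"}, "web-02": {"nginx": "1.24"}}
actual  = {"web-01": {"nginx": "1.18"}, "web-02": {}}

def reconcile(desired: dict, actual: dict) -> None:
    for host, packages in desired.items():
        for pkg, version in packages.items():
            if actual.get(host, {}).get(pkg) != version:
                # A real tool would drive a package manager or API here.
                print(f"{host}: converging {pkg} to {version}")
                actual.setdefault(host, {})[pkg] = version

reconcile(desired, actual)
```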

But many organizations will have a lot of legacy kit that may not lend itself well to this model of operation. This means that they may be forced to operate a “two-speed” IT infrastructure while the refresh cycle gradually brings in modern kit that can be software-defined.

To address this, some newer platforms are described as pre-packaged SDDC solutions. A good example is VMware’s platform, which is based on three pillars: vSphere for running virtual machines; vSAN, its software-defined storage product; and NSX, which provides software-defined networking.

These are combined with suites of management tools in various ways to deliver products such as VMware Cloud Foundation, which can be deployed onto hyperconverged systems hardware in a customer’s own data center or on a public cloud, as with VMware Cloud on AWS.

Microsoft touts Windows Server 2016 as an SDDC platform, thanks to Hyper-V for running virtual machines, Storage Spaces Direct for storage and the Hyper-V Virtual Switch for networking, plus the System Center suite for management.

There are other similar offerings, and most require the customer to purchase a complete integrated platform. These can start with a few nodes and scale out to rack level, or even larger, but all essentially lock the customer into one vendor’s platform.

Opening things up

If you prefer an open source alternative, there is the OpenStack framework. This has a modular architecture made up of numerous separate projects, with core modules including Nova for managing compute, Neutron for configuring networking, and Cinder and Swift for block storage and object storage, respectively.
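
As a hedged example of driving those modules through their APIs, the snippet below uses the openstacksdk to boot a server via Nova and create a Cinder volume; the cloud name and the image, flavor and network IDs are placeholders.

```python
# Sketch using the openstacksdk against Nova (compute) and Cinder (block
# storage); cloud name, image, flavor and network IDs are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

# Nova: boot a virtual machine on a network managed by Neutron.
server = conn.compute.create_server(
    name="worker-01",
    image_id="IMAGE_ID",              # placeholder
    flavor_id="FLAVOR_ID",            # placeholder
    networks=[{"uuid": "NETWORK_ID"}],
)
server = conn.compute.wait_for_server(server)

# Cinder: create a block storage volume that could then be attached
# to the server.
volume = conn.block_storage.create_volume(name="worker-01-data", size=20)
print(server.status, volume.id)
```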

OpenStack is notably used by CERN, the European particle physics laboratory, to manage tens of thousands of compute nodes forming the IT infrastructure serving the Large Hadron Collider and other experiments.

Thus far, this article has only touched on IT infrastructure, but data centers also comprise other facilities such as power distribution and cooling. Might these also be managed under software control in order to make the most efficient use of resources?

Software-defined power is starting to get some attention from vendors such as Virtual Power Systems (VPS). This firm has developed its Intelligent Control of Energy (ICE) technology to enable the use of UPS batteries to meet some of the power demand during periods of peak load. This means that the power distribution infrastructure does not have to be over-provisioned to cope with peaks that may occur only infrequently - and the data center owner may get a rebate from the energy utility.
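
The control logic behind this kind of peak shaving can be pictured with a simple model: when demand exceeds the provisioned feed, cover the excess from the batteries, otherwise recharge them. The sketch below is purely conceptual, with invented figures, and is not a representation of VPS's ICE software.

```python
# Conceptual peak-shaving logic only, not VPS's actual ICE software:
# when rack demand exceeds the provisioned utility feed, draw the excess
# from UPS batteries; otherwise recharge them. All figures are invented.
PROVISIONED_KW = 500            # capacity the feed is sized for
BATTERY_CAPACITY_KWH = 200

battery_kwh = 150.0

def dispatch(demand_kw: float, interval_h: float = 0.25) -> float:
    """Return how many kW the battery covers for this interval."""
    global battery_kwh
    if demand_kw > PROVISIONED_KW:
        # Peak: cover the excess from the batteries, limited by charge left.
        shave_kw = min(demand_kw - PROVISIONED_KW, battery_kwh / interval_h)
        battery_kwh -= shave_kw * interval_h
        return shave_kw
    # Off-peak: use spare feed capacity to recharge the batteries.
    charge_kw = min(PROVISIONED_KW - demand_kw,
                    (BATTERY_CAPACITY_KWH - battery_kwh) / interval_h)
    battery_kwh += charge_kw * interval_h
    return 0.0

for load in (420, 480, 560, 610, 530, 450):   # example 15-minute readings
    print(load, "kW demand ->", round(dispatch(load), 1), "kW from battery")
```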

In terms of cooling, data center infrastructure has become smarter, but is rarely marketed as “cooling on demand.” One early approach was from HPE, which some years ago (as HP) touted a combination of sensors and computational fluid dynamics (CFD) to analyze the flow of air within the data center and route cold air to where it was most needed.

More recently, liquid cooling proponents point to their technology’s ability to remove heat precisely where it is generated, and Inertech, a subsidiary of Aligned Energy, offers a system in which the cooling units are distributed and positioned above the racks, so cooling can be delivered on demand, where and when it is needed.

It seems clear that all aspects of data centers are becoming instrumented, and are controlled more precisely by software. The future of the data center is software-defined - but two questions remain: what exactly will that look like, and how long will it take for organizations to get there?

This article appeared in the April/May issue of DCD Magazine.