Over the past several years, 'software-defined' has become another of those terms used so often that its meaning has blurred. However, software-defined infrastructure is central to the way data centers are being reshaped to meet the requirements of modern IT and applications, which are increasingly distributed and dynamic.

One of the stumbling blocks that enterprises and data center operators face is that IT infrastructure is increasingly complex to deploy and configure. Applications often depend on an array of other services and resources being in place before they can function properly.

Solving this problem calls for a certain degree of automation, according to Andy Buss, consulting manager for data center infrastructure and client devices at IDC Europe: “Any business that is on a journey to digital transformation needs automation, as automation is the ability to put actions into IT policy,” he said, adding that “moving from dev to ops is about being able to deploy [infrastructure] as easily as possible.”


Send help

The implication is that the infrastructure must be capable of being reconfigured or repurposed using software, rather than having engineers walk into the data center and physically reconfigure it.

This kind of approach is already used extensively by many of the biggest Internet companies, such as Google and Amazon Web Services (AWS), as it is a fundamental requirement of operating a public cloud platform, where services may be continuously starting, scaling up, and eventually releasing resources again when they terminate.

In other words, 'software-defined' refers to infrastructure and resources that are decoupled from the hardware they run on, and that can be dynamically configured and reconfigured under software control, typically through application programming interfaces (APIs).

Among the first components of IT infrastructure to get the software-defined treatment were servers, through virtualization. A virtual machine running in a cloud is effectively a software-defined server, since it has been decoupled from the underlying physical hardware, and can be moved from one physical server to another in the event of a hardware failure, or for other reasons such as load balancing.
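To give a sense of what 'under software control' looks like in practice, the sketch below uses the libvirt Python bindings to live-migrate a guest between two KVM hosts. The host and VM names are purely illustrative, and shared storage with live migration enabled is assumed.

```python
# A minimal sketch, assuming the libvirt-python bindings and two KVM hosts
# (names are hypothetical) with shared storage and live migration enabled.
import libvirt

# Connect to the source and destination hypervisors
src = libvirt.open("qemu+ssh://host-a/system")
dst = libvirt.open("qemu+ssh://host-b/system")

# Look up a running guest by name on the source host
dom = src.lookupByName("web-vm-01")

# Live-migrate the virtual machine to the destination host; the guest keeps
# running while its memory and device state are copied across
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```

The same operation can be triggered by a management layer in response to a hardware fault or a load-balancing policy, with no one touching the physical servers.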

The past few years have seen the rise of containers, which are less demanding of resources than virtual machines. Platforms such as Docker also put more of the focus on distributing and managing applications as a collection of containerized functions.
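As an illustration of how lightweight this model is, the following sketch uses the Docker SDK for Python to pull and start a containerized web server in a single call; the image, port mapping and container name are illustrative.

```python
# A minimal sketch using the Docker SDK for Python (docker-py); the image
# name, port mapping and container name are illustrative.
import docker

client = docker.from_env()  # talk to the local Docker daemon

# Pull and start a containerized service in one call; the container shares
# the host kernel, so it starts far faster than a full virtual machine
container = client.containers.run(
    "nginx:latest",
    detach=True,
    ports={"80/tcp": 8080},
    name="sd-demo-web",
)

print(container.status)

# Tear the service down again just as programmatically
container.stop()
container.remove()
```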

As trends like cloud computing have become an increasingly important part of the data center, the push to make more of the infrastructure software-defined, and therefore more agile, has grown.

You say potato


'Software-defined' can mean different things to different vendors, and there is no widely agreed standard definition. For example, software-defined storage can include storage virtualization, such as EMC's ViPR platform, which enables a customer to build their storage infrastructure from a mix of arrays, including those from multiple vendors, and manage it all from a single console.

But software-defined storage more commonly refers to software such as the open source GlusterFS that runs on a cluster of commodity server nodes and uses these to create a scalable pool of storage. This model is seen in hyperconverged infrastructure (HCI) systems, in high-performance computing (HPC) clusters, and at very large Internet companies like Google and Facebook, because it is easy to provision and scale.
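As a rough illustration of that provisioning model, the sketch below drives the standard GlusterFS command line from Python to pool bricks on three commodity nodes into one replicated volume. The node names and brick paths are hypothetical, and the peers are assumed to have already been joined to the cluster.

```python
# A minimal sketch of pooling commodity nodes into one scalable volume with
# the GlusterFS CLI, driven from Python; node names and brick paths are
# hypothetical, and the gluster peers are assumed to be probed already.
import subprocess

bricks = [
    "node1:/data/brick1",
    "node2:/data/brick1",
    "node3:/data/brick1",
]

# Create a volume replicated across the three nodes, then start it
subprocess.run(
    ["gluster", "volume", "create", "vol0", "replica", "3", *bricks],
    check=True,
)
subprocess.run(["gluster", "volume", "start", "vol0"], check=True)
```

Adding capacity is then a matter of adding bricks to the volume in software, rather than installing a new purpose-built array.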

Software-defined storage is only as good as the hardware it is running on, and may not be as reliable as purpose-built enterprise storage arrays. The assumption is that redundancy and other capabilities like data replication will be taken care of elsewhere in the software stack.

Software-defined networking similarly covers a number of different approaches designed to make the network infrastructure more dynamic and easier to configure as required.

One approach is to separate the control plane, or management part of the network, from the forwarding plane, typically the switch hardware that actually routes packets around the network. The idea here is to centralize control over network traffic, using tools such as OpenFlow, a protocol designed to provide a standard mechanism for this that is supported by switch vendors including Dell, HPE, and Cisco.
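The sketch below illustrates the idea using Ryu, an open source Python framework for writing OpenFlow controllers: when a switch connects, the controller installs a lowest-priority 'table-miss' rule so that any packet the switch cannot match is sent up to the controller for a decision.

```python
# A minimal sketch of a centralized control plane, assuming the Ryu SDN
# framework: when an OpenFlow 1.3 switch connects, install a table-miss
# rule so unmatched packets are punted up to the controller.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class MinimalController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Match everything at the lowest priority and send it to the controller
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

The forwarding hardware stays dumb and fast, while the policy that decides where traffic goes lives in software that can be changed centrally.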

Killer combo


A different approach is to virtualize the network, allowing the creation of logical networks that use the physical network to move data around, but which do not necessarily have the same IP address scheme, and may each differ in their quality of service and security policies.

This latter approach is typified by VMware's NSX and the Neutron module in the OpenStack platform, and is vital for supporting multi-tenancy in data centers, especially those hosting public cloud services.

Virtualizing the network not only enables new network connections to be created without having to physically move patch cables around, but in the case of VMware’s NSX, it also enables greater oversight of network traffic, since the switching and routing capability is integrated into the hypervisor and distributed throughout the infrastructure.
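The sketch below shows what this looks like from a tenant's point of view, using the OpenStack SDK for Python to ask Neutron for a logical network and subnet. The cloud name and addressing are illustrative, and credentials are assumed to be configured in a local clouds.yaml file.

```python
# A minimal sketch using the OpenStack SDK for Python; the cloud name and
# addressing are hypothetical. Each tenant can carve out its own logical
# network and subnet without anyone touching the physical switches.
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials come from clouds.yaml

net = conn.network.create_network(name="tenant-a-net")
conn.network.create_subnet(
    network_id=net.id,
    name="tenant-a-subnet",
    ip_version=4,
    cidr="10.10.0.0/24",
)
```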

These strands – software-defined compute, storage and networking – do not exist separately, but are very often interdependent. Pulling them all together is the ultimate aim, to deliver what Intel and others term the software-defined data center (SDDC).

With SDDC, the entire data center infrastructure should be configurable under software control, making it easier to automate so that IT staff can devote less time to simply keeping the lights on. This means the fourth piece of the puzzle is management and orchestration.

“It’s pointless having software-defined anything without automation and orchestration,” said Buss. He cited tools such as Microsoft’s System Center Operations Manager (SCOM) and VMware’s vCloud Director as examples, but added that these have yet to reach the same level of maturity as those used by the big cloud providers.

Also less mature, but gaining support among service providers, is OpenStack, which is open and non-proprietary, and thus independent of any single vendor, and which presents a set of APIs that other software and services can plug into.
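As a rough sketch of what orchestrating against those APIs involves, the example below uses the OpenStack SDK for Python to boot a compute instance onto the logical network created earlier; the image, flavor and network names are hypothetical.

```python
# A minimal sketch of driving OpenStack's compute API with the Python SDK;
# the image, flavor and network names are hypothetical. An orchestration
# tool would make calls like these to assemble a full application stack.
import openstack

conn = openstack.connect(cloud="mycloud")

image = conn.compute.find_image("ubuntu-16.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("tenant-a-net")

server = conn.compute.create_server(
    name="app-node-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance is ACTIVE before handing it to the next step
conn.compute.wait_for_server(server)
```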

Overall, the SDDC approach has passed an inflection point and is growing in importance, according to Buss: “The public cloud demonstrates it is feasible, and as we move to a multi-cloud world, there will be a need for compatibility between clouds which is driving a lot of thought in the industry now.”

A version of this article appeared in the April/May 2017 issue of DCD magazine