
As IT infrastructure and operations have grown more complex, technologies such as cloud computing and virtualization have been adopted to meet changing business needs. Over the past decade, server virtualization has redefined the deployment, management and optimization of computing resources, transforming the data center into a more adaptable and efficient platform for business applications. Workloads that once ran on static, dedicated servers are now hosted in dynamic, virtualized server environments that can be scaled and shaped on demand.

While virtualization reshapes data center operations, enabling enterprises to deploy affordable, rack-based servers that can be pooled and allocated to shifting application demands, the transformation is incomplete. Network and storage assets in data centers remain tightly siloed and statically configured. Few facilities are capable of automating and orchestrating the management of pooled network and storage hardware.

The software defined data center (SDDC) claims to change that. It is described by VMware as: “A unified data center platform that provides unprecedented automation, flexibility, and efficiency to transform the way IT is delivered. Compute, storage, networking, security, and availability services are pooled, aggregated, and delivered as software, and managed by intelligent, policy-driven software.”

A data center as big as The Ritz

Building on what virtualization has already done for servers, the SDDC virtualizes network and storage resources to create an abstracted data center infrastructure that can be managed and accessed by software and applications. The goal of the SDDC is to deliver benefits across many facets of data center operations: more efficient use of resources, easier provisioning and reallocation, and faster deployment of new applications.

Ultimately, SDDC will eliminate the need for IT technicians to manipulate siloed server, network and storage hardware in response to a provisioning request. Rather, provisioning takes place automatically within the framework of defined rules, policies and service-level agreements (SLAs), passed via application programming interface (API) calls to the automation and orchestration engine that configures the appropriate resources from a pooled environment.
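
To make that concrete, the sketch below shows what such an API call might look like. It is a minimal illustration only: the endpoint, payload fields and the “gold-sla” policy name are assumptions invented for this example, not part of any real SDDC product.

```python
# Minimal sketch of a policy-driven provisioning request to a
# hypothetical SDDC orchestration API; endpoint, fields and the
# "gold-sla" policy name are illustrative assumptions only.
import requests

ORCHESTRATOR = "https://orchestrator.example.com/api/v1/provision"

payload = {
    "workload": "billing-app",
    "resources": {"vcpus": 8, "memory_gb": 32, "storage_gb": 500},
    "network": {"tier": "internal", "bandwidth_mbps": 1000},
    "policy": "gold-sla",  # maps to availability and latency targets
}

# The orchestration engine, not a technician, decides which pooled
# server, storage and network resources satisfy the request.
response = requests.post(ORCHESTRATOR, json=payload, timeout=30)
response.raise_for_status()
print(response.json())  # e.g. an identifier for the provisioned environment
```

The point is the division of labor: the caller states requirements and a policy; the selection of physical resources is left entirely to the engine.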

The Impact SDDC Has on Data Center Infrastructure

As resource allocation becomes more dynamic, power and cooling infrastructure must keep pace with the scalable demands of the data center. Power and cooling play an important role in making the SDDC vision real. While IT resources have been virtualized behind a layer of abstraction, very little such abstraction exists in the facility itself. Even in facilities with a building management system (BMS) or data center infrastructure management (DCIM) system, the extent to which power and cooling have been abstracted is often insufficient to achieve the full benefits possible with SDDC.

For data centers, facilities equipment plays a critical role in ensuring that SLAs are met. Data center operators therefore have to develop integrated and adaptable power and cooling solutions that are in line with planned and provisioned infrastructure capacity.

Theoretically, numerous VMs can be deployed on the virtual layer of abstraction, but the amount of power and cooling available in any data center is finite. Data centers therefore have to optimize power usage, and operators must redefine the key integration touch points between the data center and its building management, infrastructure management and monitoring systems.

To fulfill the future promise of the SDDC, software-defined power must be added. Its potential can be realized if the industry reaches consensus on a solution with a reference architecture and common standards. This would allow power to be supplied to data centers based on demand consumption, rather than planning and provisioning power and cooling requirements around pre-existing knowledge of peak system usage.
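
As a rough illustration of the difference between demand-based and peak-based provisioning, the sketch below admits new workloads against live headroom rather than a worst-case budget. The budget figure, rack names and telemetry values are all invented; a real implementation would read from a DCIM or BMS feed.

```python
# Minimal sketch of demand-based power budgeting: admit workloads
# against measured draw instead of provisioning for assumed peaks.
# All figures below are invented for illustration.
FACILITY_BUDGET_KW = 400.0

def measured_draw_kw() -> dict[str, float]:
    """Stand-in for a live DCIM/BMS telemetry feed (hypothetical values)."""
    return {"rack-01": 78.5, "rack-02": 102.3, "rack-03": 64.0}

def headroom_kw() -> float:
    """Power left in the facility budget given current measured draw."""
    return FACILITY_BUDGET_KW - sum(measured_draw_kw().values())

def can_admit(new_load_kw: float) -> bool:
    """Admit a new workload only if live headroom covers its draw."""
    return headroom_kw() >= new_load_kw

if __name__ == "__main__":
    print(f"headroom: {headroom_kw():.1f} kW")
    print("admit a 120 kW workload?", can_admit(120.0))
```

Peak-based planning would instead reserve each workload’s nameplate maximum up front, stranding capacity that demand-based accounting can reuse.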

More than half of all application downtime today is caused by power problems. Including power and cooling as software-defined elements in the application environment therefore makes it possible to improve availability by fully abstracting applications from all physical resources within the data center.

A Battle between Hardware and Software Vendors

In recent years, IT systems in data centers have gone through a series of dynamic shifts. There has always been competition between hardware, software and services suppliers in defining systems. For example, Network Function Virtualization (NFV) is being used to target Huawei and Ericsson in the telecoms sector, while software-defined networking (SDN) and OpenFlow are used to challenge Cisco’s enterprise networking dominance. Similarly, software-defined storage (SDS) is used against EMC and others with a large proprietary storage array business. Cloud services and data center outsourcers play a similar role, removing the need to build systems at all.

It is important to remember that ‘standard’ does not mean ‘open’, and that all data center software has to run on some sort of hardware. VMware and Microsoft virtualization software is proprietary, as are IBM SVC and NetApp ONTAP in storage. Also (almost by definition), OpenStack, OpenFlow and other open-source initiatives remain works in progress.

It is wrong to assume that all applications ought to run in virtual machines; that it is always best to separate the control and data planes; or that what everyone wants is a system design that can do anything elastically at any time. IT investment is always sporadic, leaving different technology layers in most data centers. In addition, a full transformation strategy is not easy to adopt. For example, a company adopting Cisco’s Application Centric Infrastructure (ACI) will have to buy a lot of Nexus switches to achieve SDN, since ACI runs only on them.

There are winners and losers already

If ‘Software Defined’ approaches are viewed as part of the battle between hardware and software suppliers, then it can be argued that VMware has won in x86 server virtualization and Amazon Web Services has won in IaaS. However, the jury is still out in the storage and networking realms. It is true that most data centers are characterized by a plethora of management tools and infrastructure software, but such a situation is probably better addressed by consolidating down to a smaller number of packages and suppliers than by adopting a complex new architecture.

There are also moves away from building all systems on virtualized x86 machines, such as Docker and other container-based approaches, IBM’s Watson and little-endian Power servers for Linux, and HP’s ‘The Machine’. The industry should also watch the development of API-defined and catalogue-defined approaches.

SDDC and the Cloud

The Cloud can be viewed as a marketing term for application, platform, or infrastructure services that internal or external customers procure on demand through web forms. By contrast, the software defined data center is the mechanism through which cloud services can be delivered most efficiently.

The long-term vision is to transform IT into a service that can be provided to end-users/consumers. Currently, the best way to achieve that vision is through cloud computing models like IaaS and Platform-as-a-Service (PaaS). Technologies like SDDC strengthen the ability to achieve that long-term vision by enabling IT models like cloud computing.

SDDC goes beyond traditional abstraction of core hardware assets, establishing a single toolkit that also encompasses clouds. Potentially, an SDDC implementation could allow servers and other hardware to be shut down or run at lower power levels, which has implications for energy use. Some experts see the SDDC as a more secure alternative to the public cloud: it gives organizations their own private cloud, allowing them far more control over hosted data.

An SDDC would start with a pool of industry-standard hardware that can be portioned out dynamically and defined by software rules and limits. It brings together the cloud infrastructure characteristics that are critical for success:

›› Standardization: Standardized hardware creates efficiencies within the resource pools. Building the environment on standardized hardware removes unnecessary complexity from the data center.

›› Holistic: Cloud infrastructure is designed to support any and all workloads and to do so in the most optimized way possible across the entire data center.

›› Adaptive: Cloud infrastructure must be dynamic in its ability to adapt to changes in resource workload. This adaptability should be automated, driven by defined configurations and by the demands of the applications it runs.

›› Automated: Automation is the hallmark of quality cloud infrastructure. When using software to define the data center, the framework must have built-in intelligence to eliminate complexity and create elastic computing without direct human guidance (a minimal control-loop sketch follows this list).

›› Resilient: The SDDC must be able to compensate for hardware and software failure. Coupled with automation and adaptability, the platform should detect problems and adapt on its own, continuing to deliver the highest possible level of availability.
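
The ‘Automated’ and ‘Resilient’ characteristics above amount to a control loop: compare the desired state of the environment to the observed state and converge without human intervention. The sketch below is a minimal, hypothetical illustration of that pattern; the services, replica counts and stub functions are invented, not drawn from any specific platform.

```python
# Minimal sketch of an automated, resilient reconciliation loop:
# converge observed state toward desired state with no human step.
# Services, counts and stubs below are invented for illustration.

DESIRED = {"web": 3, "db": 2}  # healthy instances wanted per service

def observe() -> dict[str, int]:
    """Stand-in for querying the platform for healthy instances."""
    return {"web": 2, "db": 2}  # e.g. one web instance has failed

def start_instance(service: str) -> None:
    """Stand-in for asking the orchestration engine for a replacement."""
    print(f"starting replacement instance of {service!r}")

def reconcile() -> None:
    observed = observe()
    for service, wanted in DESIRED.items():
        for _ in range(wanted - observed.get(service, 0)):
            start_instance(service)

if __name__ == "__main__":
    reconcile()  # in practice, run periodically or on failure events
```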

Data Center Architecture in Transition

Server virtualization has greatly improved data center operations, providing significant gains in performance, efficiency and cost-effectiveness by enabling IT departments to consolidate and pool computing resources. Many organizations are now looking to extend virtualization to network and storage resources.

By employing an abstraction layer to bring intelligent, centralized management to the entire data center infrastructure, organizations are able to transform their data centers from deeply siloed operations defined by hardware components into highly automated and effectively orchestrated resources designed around software. To leverage these capabilities, enterprises must adopt a strategy aimed at establishing the software defined data center (SDDC).

The benefits of implementing SDDC are many. Pooled server, storage and network hardware reduces the need for specialized components and servers in favor of affordable, off-the-shelf hardware that is easier to maintain. Most importantly, the SDDC enables automated, policy-driven provisioning and management of data center resources.

Programming interfaces make it possible for applications to request resources based on clearly defined rules and policies. The result is a more responsive, agile, secure and high-performing data center that takes full advantage of the underlying hardware.
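
How such a rule check might look on the engine side is sketched below; the quota structure, team name and limits are purely hypothetical.

```python
# Minimal sketch of evaluating a resource request against defined
# rules and policies. Quotas, team names and limits are invented.
from dataclasses import dataclass

@dataclass
class ResourceRequest:
    team: str
    vcpus: int
    storage_gb: int

QUOTAS = {"analytics": {"vcpus": 64, "storage_gb": 4096}}

def evaluate(req: ResourceRequest) -> bool:
    """Approve only requests that fit within the team's policy quota."""
    quota = QUOTAS.get(req.team)
    if quota is None:
        return False  # no policy defined for this team: reject
    return (req.vcpus <= quota["vcpus"]
            and req.storage_gb <= quota["storage_gb"])

print(evaluate(ResourceRequest("analytics", vcpus=16, storage_gb=512)))  # True
```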

It should be noted that simply virtualizing a data center does not result in a software defined data center. One of the principal goals of ‘software defined’ is to support the cloud data center. Providers such as Amazon, Google and Microsoft exemplify cloud-based infrastructures, in which the ability to dynamically allocate and provision resources is delivered through automation and orchestration. The aim of SDDC design is to let the enterprise inherit the orchestration ability of these public cloud providers without possessing specialized hardware platforms.

With the potential to be both truly revolutionary and safe, the SDDC provides the capabilities that enterprises need in the cloud. It can free an application completely from its physical infrastructure, allowing a wide scope of uses including deploying, managing, storing, computing and networking myriad business applications in the cloud. As data center technologies continue to evolve, DCD Intelligence believes that hardware and software platforms will become more interconnected.

Hawkins Hua is an analyst with Datacenter Dynamics Intelligence.