Virtualization has, to steal a cliché, eaten the IT world. Starting with compute, moving to storage, and on to networking, it’s become the foundation of the modern data center, letting us get more bang for our buck. Each new layer of virtualization adds more abstraction, and with it, more ways of controlling and automating our systems.

The more automation we put in our data centers, the closer we get to the ideal of the software defined data center (SDDC), where managed storage and virtualized compute blend with software defined networking and the new generation of security technologies into a single abstract platform that can be tuned to meet business needs, at the click of a mouse and the tap of a screen. In the ideal SDDC we’re presented with a set of layers that can be dynamically combined to construct infrastructure as and when it’s required.

It’s about the infrastructure
Treating compute, storage and networking as linked fabrics makes a lot of sense. If we aggregate compute to handle a particular problem we’re going to need one network configuration, but we can reconfigure everything to add e-commerce capacity for the holiday season. Taking advantage of the various virtualization technologies to deliver these capabilities makes sense – but we need a set of tools that coordinate all the various elements.

It’s possible to reconfigure virtual servers, virtual storage, and virtual networking manually, but this is as complex as rewiring and reconfiguring a physical data center, especially when we need to coordinate the process across the virtual infrastructure.

If we want all the benefits of virtualization we need the SDDC, which gives us one place to manage and reconfigure all the virtual components of a modern data center, and one place to coordinate their operation.

Jumping from a physical data center to an SDDC isn’t easy. It means thinking carefully about the underlying physical infrastructure, taking a tip from the way hyperscale cloud services operate. You’ll need to consider standardizing disks and SSDs, as well as storage appliances. Then there’s the networking side of things, with fiber and Ethernet interconnects for storage and SDN compatible switches. Finally you need to consider compute, choosing a standard blade server and racking.

Big players like Facebook can specify everything, a process made more widely possible through the Open Compute project. Whatever size your needs are, making sure you’ve got your suppliers in place is key to a successful SDDC transition.

SDDC: your very own private cloud
A key benefit of a software defined infrastructure is the ability to offer users self-service operations. Why should they have to wait weeks or months to get access to a server, when it can be deployed from a gallery of virtual server images, complete with pre-configured applications? Preparing those will take time, but once the gallery is in place, it will simplify user interactions and reduce the risk of rogue virtual servers affecting operations.
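The gallery model above can be sketched in a few lines: a catalog of approved, pre-configured image templates that users clone on demand rather than requesting servers by hand. This is a minimal illustration; the image names, specs, and dict-based catalog are hypothetical, not any particular vendor’s API.

```python
# Hypothetical catalog of approved, pre-configured virtual server images.
GALLERY = {
    "web-frontend": {"os": "windows-server", "apps": ["iis"], "vcpus": 2, "ram_gb": 4},
    "sql-backend":  {"os": "windows-server", "apps": ["sql-server"], "vcpus": 4, "ram_gb": 16},
}

def deploy_from_gallery(image_name: str, owner: str) -> dict:
    """Clone a pre-configured image spec instead of building a VM by hand."""
    if image_name not in GALLERY:
        # Unknown images are rejected, which is what keeps rogue servers out.
        raise KeyError(f"{image_name!r} is not in the approved gallery")
    spec = dict(GALLERY[image_name])  # copy the template, never mutate it
    spec.update(owner=owner, name=f"{image_name}-{owner}")
    return spec

vm = deploy_from_gallery("web-frontend", "alice")
print(vm["name"])
```

Because every server comes from a vetted template, users get self-service speed while administrators keep a predictable baseline.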

Once you’re doing that, of course, you’re running a private cloud. That’s not surprising: the software-defined data center is at the heart of the private cloud model, and it requires a significant shift in the way you think about your infrastructure and operations.
A well designed private cloud can be operated resiliently, like the hyper-scale public cloud, with the ability to ignore hardware failures and batch up repairs by swapping out entire storage and compute modules.

Building an SDDC is simplified by the various private cloud toolkits currently available, software that lets you take existing virtual infrastructures and layer on Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) features. You’re not limited to a single vendor either: there’s a mix of open source and proprietary software that comes together to deliver this ideal.

If you’re running a Microsoft infrastructure, the System Center management tools and the Windows Azure Pack work together to deliver a simple IaaS private cloud, complete with user portal. It’s not the full blown SDDC experience, but it comes a lot closer than many options, with tools for rapid deployment of pre-configured server images, along with the management of virtual networks between servers as part of System Center’s orchestration tools.

It’s the orchestration tools, and their associated run books that really make this an SDDC platform – even though it requires the combination of tools. New features in the upcoming Windows Server release (currently in technical preview) make it clear that Microsoft is committed to the SDDC vision, with support for software defined networking and improved storage management at the heart of the new OS.

Microsoft’s tools also support more than its Hyper-V hypervisor, with the ability to manage VMware virtual machines, and with support for more than Windows images too.

The Windows Azure Pack also encourages IT departments to make significant changes in how they operate, giving them the tools to manage chargeback billing and to give different classes of user different classes of service. Developers building departmental apps might only be entitled to bronze service, limiting the capabilities of their virtual servers, networks, and storage, while the ERP system could get gold service.
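Class-of-service plans like these boil down to resource caps applied per tier. Here’s a minimal sketch of the idea; the tier names echo the example above, but the limits and the `clamp_request` helper are illustrative, not Windows Azure Pack’s actual plan settings.

```python
# Illustrative per-tier resource limits (not real Azure Pack plan values).
TIERS = {
    "bronze": {"max_vcpus": 2, "max_ram_gb": 4},
    "silver": {"max_vcpus": 4, "max_ram_gb": 16},
    "gold":   {"max_vcpus": 8, "max_ram_gb": 64},
}

def clamp_request(tier: str, vcpus: int, ram_gb: int) -> dict:
    """Cap a VM request at its tier's limits, as a chargeback plan might."""
    limits = TIERS[tier]
    return {
        "vcpus": min(vcpus, limits["max_vcpus"]),
        "ram_gb": min(ram_gb, limits["max_ram_gb"]),
    }

# A bronze-tier developer asking for a large VM gets a capped one instead.
print(clamp_request("bronze", 8, 32))
```

Combined with chargeback billing, caps like these let departments serve themselves without any one team starving the rest of the data center.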

VMware’s recent run of acquisitions goes a long way to delivering an SDDC too, with tools for managing storage, for handling virtual networking, and for orchestrating and managing system deployments. Its vSphere management platform handles all the key elements of a virtual infrastructure. That’s matched with vCenter, a higher level set of tools that include orchestration functions and the ability to deploy virtual machine profiles across multiple hosts.

VMware’s tools provide much of the functionality to manage the infrastructure aspects of an SDDC. Where they fall down is that they manage only the infrastructure, not the services that depend on it. You’re going to need additional tools to deliver the self-service aspects of an SDDC, and to encapsulate and manage the applications and services running on those virtual machines.

You can use the tools VMware offers in conjunction with Pivotal’s Cloud Foundry to add more self-service as well as PaaS features to simplify complex application development. If you’re considering an open source option, OpenStack offers many of the key technologies that make up an SDDC.

A single management interface lets you control compute, storage and networking, with support for many common hypervisors and key software-defined networking technologies. OpenStack has garnered a lot of industry support, and while not yet fully mature, offers an alternative to the proprietary approach of VMware and Microsoft.

Docker ahoy!
New technologies, like the Docker containerization service, look likely to simplify the process of building and running software defined data centers. By thinking in terms of workloads rather than virtual servers, Docker lets you change the way you encapsulate data and applications, allowing many workloads to share a single server instance – speeding up deployment and launch, as there’s no need to stand up an entire VM for each one. With Microsoft adding support for containerization (and the Docker client) in the next release of Windows Server, it’s a technology worth exploring.
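The workload-centric shift is easiest to see in how little you have to describe: an image, a name, and some port mappings, rather than a whole machine. As a sketch, here’s a hypothetical workload spec rendered into the equivalent `docker run` invocation (the `nginx` image and port values are illustrative; only standard flags – `-d`, `--name`, `-p` – are used).

```python
def docker_run_command(spec: dict) -> str:
    """Render a workload spec as a detached `docker run` command line."""
    parts = ["docker", "run", "-d", "--name", spec["name"]]
    # Map host ports to container ports with -p host:container.
    for host, container in spec.get("ports", {}).items():
        parts += ["-p", f"{host}:{container}"]
    parts.append(spec["image"])
    return " ".join(parts)

web = {"name": "web", "image": "nginx", "ports": {8080: 80}}
print(docker_run_command(web))  # docker run -d --name web -p 8080:80 nginx
```

Everything the workload needs is baked into the image, so deployment is a single command rather than a VM build-out.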

There’s also the option of buying the heart of an SDDC off the shelf. Having consistent hardware simplifies the process of building and running services – as it reduces the complexity of managing the infrastructure.

Cisco’s Unified Computing System delivers blade servers, storage, and network hardware, together with the control software needed to manage a rack full of devices. While UCS goes much of the way to delivering the SDDC experience with its embedded controller software, you need to provide additional management tools based on your choice of hypervisor (there’s support for bare metal solutions from Microsoft, VMware and Citrix among others). You can access UCS Manager from a browser.

There’s also API access for programmatic management, and there’s support for this in VMware’s vSphere.

One interesting element of UCS that helps it work as SDDC infrastructure is its support for stateless management. There’s no set configuration for any of the blade servers in a UCS rack – it’s all delivered by the UCS software, making it easier to rapidly configure compute nodes, and to quickly move services from one node to another, without having to reconfigure software, as settings can move from blade to blade.
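The stateless model described above can be sketched simply: a blade holds no identity of its own, and a service profile carrying the identity (MAC addresses, boot order, and so on) can be detached and reapplied elsewhere. This is a conceptual illustration, not UCS Manager’s actual object model, and the profile values are made up.

```python
class Blade:
    """A bare compute node: in the stateless model it holds no identity."""
    def __init__(self, slot: int):
        self.slot = slot
        self.profile = None

def move_profile(profile: dict, source: "Blade", target: "Blade") -> None:
    """Detach a service profile from one blade and apply it to another."""
    assert source.profile is profile
    source.profile = None
    target.profile = profile  # identity (MAC, boot settings) moves intact

# Illustrative profile: the workload's identity lives here, not in hardware.
profile = {"name": "web-node", "mac": "00:25:b5:00:00:01", "boot": "san"}
blade1, blade2 = Blade(1), Blade(2)
blade1.profile = profile

move_profile(profile, blade1, blade2)  # e.g. after blade1 fails
print(blade2.profile["mac"])
```

Because the software on the SAN boot volume sees the same identity wherever the profile lands, the service moves without any reconfiguration.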

The run book model Microsoft uses is key to how an SDDC self-configures. Technologies like Microsoft’s PowerShell, the open source Puppet and Chef DevOps tools, and Ansible’s Tower are part of this process, as they give you a programmatic platform to build and manage configurations across a data center. You can’t have an SDDC without an automated environment: the complexity demands programmatic control of network topology, of storage implementations, and of virtual machine configurations.
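What these tools share is a declarative model: you state what the data center should look like, and code computes the actions needed to get there. A minimal sketch of that reconcile loop, with made-up resource names, might look like this:

```python
# Illustrative desired and current state for a handful of resources.
desired = {"vlan10": "present", "vlan20": "present", "vm-web": "running"}
current = {"vlan10": "present", "vm-web": "stopped", "vm-old": "running"}

def reconcile(desired: dict, current: dict) -> list:
    """Compute the actions that bring current state to the desired state."""
    actions = []
    for name, state in desired.items():
        if current.get(name) != state:
            actions.append(("ensure", name, state))
    for name in current:
        if name not in desired:
            actions.append(("remove", name))
    return actions

for action in reconcile(desired, current):
    print(action)
```

Run repeatedly, a loop like this is idempotent: once the data center matches the desired state, it produces no actions at all.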

A fully automated data center has other benefits: it can simplify disaster recovery. The same scripts that automate your SDDC can be used to set up a disaster recovery site, and keep its applications and services synchronized with your main site.
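The DR benefit falls out of parameterization: if your site build is a function of its inputs rather than hand-applied configuration, the same definition can stand up a recovery site. A toy sketch, with hypothetical site and service names:

```python
def build_site(site: str, services: list) -> dict:
    """Provision the same stack at any site from one shared definition."""
    return {f"{site}-{svc}": "provisioned" for svc in services}

SERVICES = ["web", "db", "storage"]
primary = build_site("dc1", SERVICES)
dr_site = build_site("dr1", SERVICES)  # identical stack, different site
print(sorted(dr_site))
```

Keeping the DR site in sync then becomes a matter of re-running the same scripts, rather than maintaining a second, hand-built environment.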

There’s also the option of keeping costs down by using cloud-hosted disaster recovery, using public cloud services that use the same underlying virtualization technologies as your SDDC. 
Microsoft offers an Azure-hosted Site Recovery service that uses System Center orchestration run books, while VMware recently launched its own public cloud with support for vSphere, and companies like Rackspace offer OpenStack-based services.

While many of the components needed to deliver an SDDC are here, they’re not yet integrated enough to deliver the full private cloud vision. It’s not just about the tools and services that administrators need, it’s also about the portals and web pages end users rely on to interact with the services an SDDC offers. Like the public cloud, an SDDC private cloud is an iceberg, with simple interactions hiding the underlying complexity.

Adding a new virtual server to an SDDC shouldn’t require users to reconfigure networks or storage: they should just be able to click a button and have a server waiting for final configuration and use. Administrators should also be able to ensure that those servers have a predictable configuration, so that their operation doesn’t affect the rest of the data center. It’s a balance that’s hard to strike, but one that’s critical for the effective operation of a private cloud.
It’s nearly impossible to separate the private cloud IaaS model from the SDDC, as the former depends on the latter. You can have an SDDC without a private cloud, using it to give you a predictable, manageable infrastructure with the reliability that comes from virtualization, but once you’ve gone that far, it’s only a short step to a fully self-service private cloud.

You can start to deploy elements as part of a virtualization migration, or add them as features to an existing virtual infrastructure – perhaps as a way of adding self-service VM deployment.

As you add more management and orchestration tooling, you can start to run parts of your data center as an SDDC, letting you prove the concept before a full launch.