Software is becoming ever more sophisticated, layering performance analytics on top of policy- and rules-based management capabilities, and extending that into the realm of AI. The true autonomous, lights-out data center, where the facility becomes a commoditized utility, may not be that far off, and the global growth of data towards the zettabyte era will accelerate its arrival.

The role of hardware is both shrinking and being dumbed down, with intelligence increasingly software-led. But has the ability of software to deliver the true software-defined data center (SDDC) been overstated?

A few years back, cloud was a pipe dream. Cloud required network, storage and compute to be combined in a cohesive, integrated, software-managed infrastructure. That demanded a highly abstracted (virtualized), automated, policy-based system that combined workload management, agile infrastructure provisioning, failover, disaster recovery and security. And that package simply didn’t exist.

The virtualization and cloud pioneers knew they could pool IT resources, but in the early days they had given little thought to the physical data center infrastructure, including the guts of the critical environment: power and cooling (thermal management).

Now the SDDC is in labor after a very long pregnancy, promising to deliver a software- and data-led unified toolbox that presents the data center as an abstracted private cloud, available to multiple customers, with hybrid cloud automation. But the critical environment is still not there.

Waiting for zettastructure

SDDC is needed because digital transformation is here. In 2016 the world will hit 1,000 exabytes (one zettabyte) of data traffic on the internet, says Cisco’s Visual Networking Index, with global internet data projected to grow at a CAGR of 26 percent to 2.3ZB by 2020.

The obvious drivers for this include the growth in mobile/wireless devices and applications; rich (including interactive) streaming media; the Internet of Things (IoT), big data and analytics; robotics; drones and autonomous vehicles; 3D printing; VR/AR; cognitive computing and much more. We believe zettabytes need “zettastructure” – open source, software-defined, data-driven, hyperscale and autonomous infrastructure: true SDDC.

A MarketsandMarkets study estimates that the global market for SDDC will bring in $25.6bn in revenue in 2016, growing at a CAGR of 26.6 percent to $83.2bn by 2021. So SDDC is a real market, growing really fast. But that only tells part of the story.
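
For readers who want to check those numbers, the compound annual growth rate follows directly from the two endpoints. A minimal Python sketch (the `cagr` helper is ours, purely for illustration) reproduces the study’s 26.6 percent figure:

```python
# Compound annual growth rate implied by a start value, an end value and a
# number of years: cagr = (end / start) ** (1 / years) - 1
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# MarketsandMarkets figures quoted above: $25.6bn in 2016, $83.2bn in 2021
print(f"{cagr(25.6, 83.2, 5):.1%}")  # prints 26.6%
```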

These figures pull together the components of the IT stack only: software-defined networking (SDN) and, we assume, its companion, network function virtualization (NFV); software-defined storage (SDS); and software-defined computing (SDC). These pillars – SDN/NFV, SDS and SDC, combined as software-defined infrastructure (SDI) – make up the IT stack. The study also includes services, meaning consulting, integration and deployment. But the study has a big omission: it counts only the IT infrastructure stack, the logical side north of the rack, and not the physical mechanical, electrical and plumbing (MEP) infrastructure to the south.

In our opinion, mechanical and electrical infrastructure (thermal and power management) systems must also become software-defined, where the software is data-driven, predictively analytical, policy-based and tightly integrated into IT-stack performance management.

The true jump to SDDC occurs only when software automation and data-driven analytical intelligence are brought to bear on the management of the physical critical-environment infrastructure.
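
To make that concrete, here is a deliberately minimal sketch of what “data-driven, policy-based” control of the critical environment could look like in practice. It is illustrative only: the telemetry and setpoint callables (get_rack_telemetry, set_cooling_setpoint) are hypothetical placeholders, not any vendor’s DCIM or BMS API.

```python
import time
from dataclasses import dataclass

@dataclass
class CoolingPolicy:
    max_inlet_temp_c: float = 27.0  # upper bound for server inlet air
    deadband_c: float = 3.0         # margin below the bound treated as overcooled

def control_loop(get_rack_telemetry, set_cooling_setpoint, policy: CoolingPolicy):
    """Poll rack-level IT telemetry and nudge facility cooling to meet the policy.

    get_rack_telemetry() is assumed to yield dicts such as
    {"id": "rack-12", "inlet_temp_c": 28.4}; set_cooling_setpoint(rack_id, delta_c)
    is assumed to adjust the supply-air setpoint for that rack's zone.
    """
    while True:
        for rack in get_rack_telemetry():
            if rack["inlet_temp_c"] > policy.max_inlet_temp_c:
                set_cooling_setpoint(rack["id"], delta_c=-1.0)   # hot rack: cool harder
            elif rack["inlet_temp_c"] < policy.max_inlet_temp_c - policy.deadband_c:
                set_cooling_setpoint(rack["id"], delta_c=+0.5)   # overcooled: save energy
        time.sleep(60)  # re-evaluate once a minute
```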

Critical environment functions have been handled under the catch-all category of data center infrastructure management (DCIM) and, more recently, a higher order of function known as data center service optimization (DCSO), which seeks to integrate DCIM with IT service management (ITSM). However it is done, we need to see an end to the old silos.

Down with silos

For years in the industrial world, IT and operational technology (OT – the world of control engineering) have been treated as separate technical disciplines with disparate cultures. Now fresh thinking and the arrival of new technologies are giving IT the ability to automate and use OT data.

There are those who don’t think we need to integrate and software-define the full stack. DCD disagrees. During the past decade we have learned the folly of treating the logical and physical sides of the critical-environment world as different countries (if not planets).

When design and engineering professionals on both sides of the border speak different languages, uptime, availability, resiliency and efficient IT performance are all put at risk. Those risks still exist in far too many enterprises.

In the early days of virtualization, the pace of change was hard for facilities engineers to keep up with, as server, storage and networking technology advanced with every server refresh. Power and cooling were static for the life of the facility – at least a decade. That is no longer true.

For now, the true SDDC may be limited to organizations with deep pockets and monolithic applications – the vertically integrated hyperscalers and cloud service providers that can push out the boundaries of data center-as-a-service. But any organization requiring DevOps-style, internet-facing agility at the application and workload level will increasingly want these characteristics from its in-house or outsourced data center-as-a-service provider. To meet the demands placed on them, data centers must become open-source, full-stack integrated, software-defined and autonomous, right down to the lowest level of their infrastructure.

None of the component infrastructure architectures are immune to all this. And technology advances won’t stop. Silicon photonics is now ready for rapid market adoption.

Blockchain, the tech behind Bitcoin and other cryptocurrencies, could find its way into all kinds of industries and applications such as FinTech, electronic voting, smart micro-contracts and provenance verification for high-value art and gems. DCD predicts blockchain will be the common methodology for managing cloud-based workload capacity-exchange contracts.

Meanwhile, other technologies are being pumped up by venture capitalists, regardless of their actual viability. DNA-strand-based data storage doesn’t yet exist outside the R&D lab, but that “market” is already valued at $1bn. Quantum computing is another non-existent market that already carries a five-year-out valuation of $5bn.

In the future, to accommodate rapid growth in demand and shifts in the underlying platform technologies, we must move towards a world where cloud services and capacity will not require human touch, except where humans are unwilling to let control go to software.

“Software is eating the world,” Marc Andreessen famously opined in the ancient history of 2011. That is now coming true in ways he could not have predicted.

This article appeared in the July/August issue of DatacenterDynamics magazine.