The data centre industry is where I became established in business, so it was a pleasure to be invited to take part in the opening plenary session at DatacenterDynamics Converged Europe 2015. Together with Brandon Butterworth, chief scientist for the BBC, Cole Crawford, the former head of Open Compute, and Kushagra Vaid, general manager of server engineering at Microsoft, we talked about the emerging role of Edge Computing and the impact it is going to have on data centre infrastructure.

A couple of stark statistics were presented. Firstly, the exponential growth of global data anticipated over the coming five years: from 4.4ZB in 2013 to a projected 44ZB by 2020. We are indeed not far from the age of the Yottabyte. Secondly, the prediction that, if we continue to process and store data the way we do today, by 2040 the entire global energy supply will be consumed by large-scale data centres.
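A quick back-of-the-envelope sketch (my own extrapolation from the quoted figures, not a forecast presented in the session) shows just how aggressive that curve is:

```python
from math import log

# Figures quoted in the session: 4.4ZB of global data in 2013,
# a projected 44ZB by 2020 -- a tenfold rise over seven years.
start_zb, end_zb = 4.4, 44.0
years = 2020 - 2013

# Implied compound annual growth rate (about 39 percent per year).
cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"implied growth: {cagr:.1%} per year")

# If that rate held, the first yottabyte (1,000ZB) of global data
# would arrive roughly 9.5 years after 2020, i.e. around 2030.
years_to_yb = log(1000 / end_zb) / log(1 + cagr)
print(f"1YB reached about {years_to_yb:.1f} years after 2020")
```

At that pace the yottabyte era would be barely a decade away, which is what makes the energy prediction above so alarming.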


Contemplating the Edge

Clearly, we cannot continue the way we are going. Looking across the board at how data is being generated, analysed and consumed, we increasingly need to move content closer to the customer. And it was into this context that I introduced the term DC-PoP.

The idea for DC-PoP came to me when I was thinking about the ride-hailing company Uber. Like it or loathe it, Uber is smart at connecting supply and demand: getting more service providers onto the road when demand (and therefore the ability to profit) is surging. In addition to providing drivers with flexible, independent work, it introduced an app – UberPop – to link users to drivers without a professional taxi or chauffeur licence. All of this enables Uber to flex the availability of its services by incentivising people to become a driver for a day.

I think the term DC-PoP very accurately depicts the way a new generation of micro data centres – microDCs – is emerging to meet the accelerating need for local or remote data processing capability. If you’re a customer on the ground and you have to wait while data goes off to some central repository and comes back again, then a) it’s very expensive and, perhaps more importantly, b) your experience of the service isn’t very good.
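As a rough illustration of why the round trip hurts (my own back-of-the-envelope physics, not figures from the session): even over a perfect network, distance alone puts a floor under latency, because light in optical fibre covers only about 200,000km per second. The distances below are invented for illustration:

```python
# Best-case propagation delay, ignoring routing, queuing and processing.
# Light in optical fibre travels at roughly 200,000km/s (about 2/3 c).
FIBRE_KM_PER_S = 200_000

def round_trip_ms(one_way_km: float) -> float:
    """Minimum round-trip time over the given one-way distance."""
    return 2 * one_way_km / FIBRE_KM_PER_S * 1_000

# Illustrative distances only: a far-away central site vs a local microDC.
for label, km in [("central repository, 1,500km away", 1_500),
                  ("local microDC, 30km away", 30)]:
    print(f"{label}: at least {round_trip_ms(km):.1f}ms")
```

Fifteen milliseconds versus a fraction of one, before a single byte has been processed. That is the gap a microDC closes.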

In fact, Brandon Butterworth more or less confirmed this. In determining the BBC’s data centre strategy, they had very much come to the conclusion that a single mega data centre might simply represent a single, large potential point of failure. Multiple, distributed facilities might not only provide more resilience for services; they would also enable services to be tailored to individual customer preferences rather than taking a one-size-fits-all approach.

Data generated by what has come to be known as the Internet of Things comes from points of origin all over the world. Rather than relying on central processing and storage – the old way (and, by the way, quite appropriate for some applications) – it makes complete sense to provide local processing resources for data that needs to meet an immediate, real-time need. The question is: how do you do that locally?

The solution, of course, is a microDC architecture that can provide consumers and business users with a data centre environment tailored to the needs of their application, easily and with speed and agility. An application-specific data centre that can be turned on at the very moment of need. Now imagine a global network of interconnected microDCs that allows you to flex your processing capability according to your exact demand and load. That, in a nutshell, is my vision of the future with DC-PoP.
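To make that idea concrete, here is a minimal, entirely hypothetical sketch of the decision a DC-PoP network would keep making: send each workload to the nearest microDC that still has spare capacity. The site names and numbers are invented for illustration, and a real scheduler would also weigh data locality, cost and regulation:

```python
from dataclasses import dataclass

@dataclass
class MicroDC:
    name: str
    latency_ms: float   # round-trip time from the user to this site
    free_slots: int     # spare processing capacity available right now

def place_workload(sites: list[MicroDC]) -> MicroDC:
    """Pick the lowest-latency site with spare capacity, falling back
    to the next-nearest site when the closest one is full."""
    candidates = [s for s in sites if s.free_slots > 0]
    if not candidates:
        raise RuntimeError("no spare capacity anywhere in the network")
    return min(candidates, key=lambda s: s.latency_ms)

# Hypothetical inventory of a DC-PoP style network.
network = [
    MicroDC("london-edge", latency_ms=4, free_slots=0),
    MicroDC("manchester-edge", latency_ms=11, free_slots=8),
    MicroDC("frankfurt-core", latency_ms=28, free_slots=50),
]

print(place_workload(network).name)  # london-edge is full -> manchester-edge
```

The point of the sketch is the flexing: when the nearest site fills up, work spills to the next-nearest, exactly as Uber flexes its pool of drivers when demand surges.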

Tanuja Randery is President of Schneider Electric, UK and Ireland