AMBROSE MCNEVIN, FOCUS: In this panel discussion we will deal with the existing disciplines in the stack, breaking it down into facilities and IT, and we are looking at 2020 and beyond. I have heard current data center operations described as museums of IT run on industrial-revolution infrastructure. With this in mind, what would your definition of the software-defined data center (SDDC) be?

Dr Ian Bitterlin, Ark Continuity: I have kind of reached the conclusion that an SDDC is a true data center infrastructure management (DCIM) hypervisor controlling what is going on in the IT stack. I was then asked about the control of batteries and uninterruptible power supply (UPS). I don’t think the software needs to do this. I think I can see that the SDDC is probably what is going to happen because, quite simply, that is the only way we are going to be able to control the amount of data flowing in and out of and being crunched in the facility.

David Gauthier, Microsoft: Is it the DCIM or just the core applications that are running in the virtualization environment, moving loads left and right? I also think it can be smarter to reallocate load to another data center rather than spin up the generators for the data center, reducing the cost of fuel and the carbon footprint. This means the data center is starting to move workloads around rather than switching on the electrical gear.
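
Gauthier’s point amounts to a cost and carbon comparison: ride out a utility outage on the site’s own generators, or migrate the affected load to a sister site with spare capacity. The sketch below is purely illustrative – the site names, prices, carbon factors and the simple linear cost model are assumptions, not anything described on the panel – but it shows the shape of the decision.

```python
# Hypothetical sketch: when a site expects to lose utility power, compare the
# cost and carbon of running its generators against migrating the affected
# workload to another data center. All names, numbers and the cost model are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    spare_capacity_mw: float       # headroom available to absorb migrated load
    migration_cost_per_mwh: float  # network egress, performance impact, etc.
    carbon_kg_per_mwh: float       # grid carbon intensity at the destination

GENERATOR_COST_PER_MWH = 320.0     # assumed diesel fuel + maintenance, $/MWh
GENERATOR_CARBON_KG_PER_MWH = 700.0
CARBON_PRICE_PER_KG = 0.05         # assumed internal carbon price, $/kg

def total_cost(cost_per_mwh: float, carbon_per_mwh: float,
               load_mw: float, hours: float) -> float:
    """Energy cost plus a carbon charge for carrying `load_mw` for `hours`."""
    mwh = load_mw * hours
    return mwh * (cost_per_mwh + carbon_per_mwh * CARBON_PRICE_PER_KG)

def plan_outage_response(load_mw: float, outage_hours: float,
                         candidates: list[Site]) -> str:
    """Return 'run generators' or the name of the site to migrate to."""
    best_action = "run generators"
    best_cost = total_cost(GENERATOR_COST_PER_MWH, GENERATOR_CARBON_KG_PER_MWH,
                           load_mw, outage_hours)
    for site in candidates:
        if site.spare_capacity_mw < load_mw:
            continue  # destination cannot absorb the whole load
        cost = total_cost(site.migration_cost_per_mwh, site.carbon_kg_per_mwh,
                          load_mw, outage_hours)
        if cost < best_cost:
            best_cost, best_action = cost, f"migrate to {site.name}"
    return best_action

if __name__ == "__main__":
    sites = [Site("dublin", 12.0, 40.0, 300.0), Site("quincy", 3.0, 25.0, 120.0)]
    print(plan_outage_response(load_mw=8.0, outage_hours=4.0, candidates=sites))
```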

Ed Ansett, i3: I believe we will continue to evolve towards a model where you will see a very high degree of automation in the data center. I certainly don’t think we will get to a point where automation does absolutely everything. There will have to be some type of human control over it, but this will be considerably less. So it’s definitely the way the industry must go for economic reasons and risk reasons.

David Gauthier: I think the amount of human intervention is going to decline over time. The really cool thing, when we start looking at this data center that is software-defined, is that developers have some really awesome tools to be able to monitor the data. You can have debug routines with software that puts automation programs to shame. At the end of the day, the stuff that is actually happening in the data center – whether it is paralleling switchgear or SynapSense tuning the amount of airflow – is driven by software tools, but today software automation is not linked in with this. If we can marry these two worlds, and use the same debug stats, it is actually exciting – higher quality, higher availability, lower cost.
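
One way to read the “marry these two worlds” remark is that facility telemetry and application counters should land in the same metrics pipeline, so the same debug tooling can correlate them. The sketch below uses hypothetical metric sources and field names – it is not a real SynapSense or DCIM API – and simply illustrates sampling both worlds on one clock.

```python
# Minimal sketch, assuming hypothetical sources: facility telemetry (switchgear
# state, sensor-network airflow readings) and application counters are sampled
# together and handed to one sink, so a single set of debug tools sees both.

import time
from typing import Callable, Dict

MetricSource = Callable[[], Dict[str, float]]

def facility_metrics() -> Dict[str, float]:
    # Placeholder for reads from switchgear controllers / wireless sensors.
    return {"facility.supply_air_c": 24.5, "facility.ups_load_pct": 61.0}

def application_metrics() -> Dict[str, float]:
    # Placeholder for counters the developers' debug tooling already emits.
    return {"app.requests_per_s": 18400.0, "app.p99_latency_ms": 41.0}

def poll(sources: list[MetricSource],
         sink: Callable[[float, Dict[str, float]], None],
         interval_s: float = 30.0, iterations: int = 2) -> None:
    """Sample every source on one clock and hand the merged snapshot to a sink."""
    for _ in range(iterations):
        snapshot: Dict[str, float] = {}
        for source in sources:
            snapshot.update(source())
        sink(time.time(), snapshot)
        time.sleep(interval_s)

if __name__ == "__main__":
    poll([facility_metrics, application_metrics],
         sink=lambda ts, snap: print(ts, snap),
         interval_s=0.1)
```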

 

Ambrose McNevin: How will facilities develop? Will we be building intelligence into battery strings or switchgear, and will it be responsive to workload?

Ian Bitterlin: I think the problem will be defining what the workload is going to look like. We have a situation at the moment where 20 or 25% of the world has internet access and generates data, and the rest of the world wants it. The 25% that has it is on an exponential curve of data generation, 20% ahead of Moore’s Law. We are going to run out of power unless we have paradigm changes – in silicon nanophotonics, etc. We are going to get to the matrix. No one can predict where or when that is going to happen. If we go to 2020, that is only two hardware refreshes away, but 2030 is something else, because if we haven’t had a paradigm shift in silicon, we will be running out of power anyway, or we will have to restrict the world’s capacity to use digital services.

In terms of the data center, these will carry on getting bigger. It will be just like it was in 1990/91, when you had these really lovely big halls with mainframes in them, and in the space of 18 months there was a little black box in the corner shivering. Everyone said you would never build data centers here, and one year later it was full of boxes again, this time black instead of blue, and there was this rolling wave of technology in and out of these big rooms. I think in future you will see a big white space with a barbed-wire fence around it, and maybe with real guns next time because there will be real terrorists then. We will see that rolling wave of technology based on other platforms, and these guys will be interested in other things instead of form factor.


Ambrose McNevin: And facilities?

Ed Ansett: Facilities will develop with the technology. I think the most important thing that has happened recently is the work with Intel and solid-state drives (SSDs) in terms of write speed and so on, because that is the weak link in the chain that is driving ASHRAE’s thermal envelope. If we go to SSDs, they will probably open up that window considerably, and that will, from a facilities point of view, mean we could end up removing refrigeration, even in very warm climates.

Also on the facilities side is the mechanical load. The problem is that electrically things change almost instantly – it is on/off – whereas mechanically the time constants involved are much larger. Mechanically, what we want to be able to do – and the industry has already started to understand this – is move to a world of dynamic resource allocation, where the cloud enables customers to pay per use instead of a static payment. That means mechanically we need to be able to adapt to that. This isn’t so far off, and I certainly think this is going to happen by 2020, where we shut down entire rows, even entire data centers, and bring them back up on demand based upon requirement.
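
To make the “shut down entire rows and bring them back on demand” idea concrete, here is a minimal sketch of a capacity schedule that keeps only as many rows energized as a demand forecast requires, allowing for the mechanical plant’s start-up lead time. The row size, safety margin and lead time are illustrative assumptions rather than figures from the panel.

```python
# Hedged sketch: size the number of energized rows against the forecast peak
# within a start-up horizon, so rows can be shut down in troughs and brought
# back before demand returns. All constants are illustrative assumptions.

import math

ROW_CAPACITY_KW = 250.0   # assumed IT capacity per row
SAFETY_MARGIN = 0.15      # keep 15% headroom above forecast
STARTUP_LEAD_STEPS = 2    # rows need ~2 forecast intervals to come back cleanly

def rows_required(forecast_kw: list, step: int) -> int:
    """Rows needed now, sized against the peak within the start-up horizon."""
    horizon = forecast_kw[step:step + STARTUP_LEAD_STEPS + 1]
    peak = max(horizon) * (1.0 + SAFETY_MARGIN)
    return math.ceil(peak / ROW_CAPACITY_KW)

def schedule(forecast_kw: list) -> list:
    """Number of energized rows for each interval of the forecast."""
    return [rows_required(forecast_kw, i) for i in range(len(forecast_kw))]

if __name__ == "__main__":
    # e.g. an overnight trough followed by a daytime peak, in kW
    forecast = [400, 380, 350, 600, 900, 1200, 1150, 700]
    print(schedule(forecast))  # rows to keep powered at each interval
```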

David Gauthier: Drives and spinning disks are the things locking our temperature at Microsoft as well. But I think the key thing as a cloud provider is really about scale – running hundreds of thousands, maybe millions, of machines. You have this huge amount of overhead around refreshing hardware. Forget the failed stuff – you can figure out how to deal with that one machine at a time, or in a future pass – but refreshing hardware, going in and taking out 10 or 20MW worth of machines at a time, is something we are spending a lot of time thinking about. It is part of our strategy. I think in future the manufacturer will have modular components so you can refresh this equipment and it will be industrialized – it is just vanilla power coming in. I don’t know if you are going to care too much about UPSs or generators and mechanical plant. I think if you have machines sitting there depreciating on your books doing nothing, then that is waste.


Ambrose McNevin: I hear people say that with the data center you are building something that lasts 15 to 20 years, for an IT stack you have yet to recognize. Virtualization came out of left field. Is there another step change coming?

Ed Ansett: The next step is this business of dynamic resource allocation – cloud and software services. vMotion is allowing the industry to start really utilizing assets properly. We are used to very static, siloed environments. Virtualization was the most important thing that happened in the last ten years, and this is possibly the next important thing.


Ambrose McNevin: Where will this all live?

David Gauthier: We deployed containers in Chicago – it takes a day to get them off the truck, placed and plugged in. That is hugely valuable to a cloud provider. The challenge is that when you keep one of these things for three years, you are depreciating it over a much shorter period than you would a traditional data center. So if we can get containers to really be 40 ft computers rather than what they are now, which is a small data center, I think we have a really good opportunity to go there.

Ian Bitterlin: It won’t all live in containers because nearly everywhere in the world, apart from Japan, land cost is the cheapest element in the data center. You touched on the idea of a container being a small data center; I actually think of it as a big cabinet. This idea of being able to roll up and plug in is misleading because, before you do that, you have to spend the same 24 months building the spine and the network and power provision. I think we will still be building large white boxes in a modular way.