It would be fair to say cloud computing has changed the nature of the data center. But for all of the technological advances we have made, our IT infrastructure is still limited by specific hardware, proprietary platforms and closed systems. These limitations will fade in the next five years.
 
State of the union
By the time this reaches print, Dell will have launched PowerEdge FX2, an architecture that lets customers mix and match up to eight modules of compute, storage and networking components inside a 2U chassis.

Meanwhile, VMware’s EVO:RAIL bundles four nodes of hardware in a 2U package. The appliances themselves are set to be manufactured by Dell, HP, EMC, Fujitsu and Supermicro, among others.

OpenStack is gaining popularity as a way to run a flexible cloud on commodity hardware – but without the ‘bulletproof’ SLAs offered by traditional vendors.

IT infrastructure is becoming more customizable and less complex. Very soon, anyone will be able to run an enterprise-level cloud, as long as they can cobble together a few CPUs, some hard drives and a few sticks of memory. Traditional hardware manufacturers are joining the party, each claiming its latest product is more ‘open’ than its competitors’.

Taking all these trends into account, the next chapter in the history of cloud computing looks more interesting than anything we’ve seen before.

You ain’t seen nothing yet
Cloud collects separate IT resources into a common pool, to be shared and managed from a single point. It minimizes costs while improving utilization, and enables pay-per-use models. Public cloud providers like Amazon, Microsoft and Google are building massive data centers in the Oregon High Desert, earning profits through economies of scale.

But there’s an alternative: networks of smaller, distributed data centers that can be built quickly and with less risk. Most importantly, they can be located closer to the end-user, beating one of the greatest challenges facing the public cloud – latency.

Matthew Finnie, CTO of the British data center operator Interoute, says latency could limit cloud applications much more than access to compute. He believes the answer is hundreds of distributed public cloud ‘pods’ modeled on the way the Internet itself works.
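
To see why, try a back-of-the-envelope calculation (the figures below are our illustration, not Finnie’s). Light in fiber covers roughly 200km every millisecond, so geography alone sets a floor on response times before a single router or server is involved:

    # Minimum round-trip propagation delay over fiber.
    # Assumes signals travel at ~2/3 the speed of light in glass,
    # i.e. about 200km per millisecond; routing, queuing and
    # processing delays only add to these figures.
    FIBER_KM_PER_MS = 200.0

    def round_trip_ms(distance_km: float) -> float:
        """Lower bound on request/response time over fiber."""
        return 2 * distance_km / FIBER_KM_PER_MS

    for label, km in [("same metro", 50), ("cross-continent", 4000),
                      ("transatlantic", 6000)]:
        print(f"{label:>15}: {round_trip_ms(km):5.1f} ms minimum")

A pod 50km from the user answers in half a millisecond of wire time; a mega data center an ocean away needs 60ms before it has done any work at all.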

A distributed data center model allays another growing concern – data sovereignty. Regulation of the cloud market is only going to increase, and some European countries have already introduced rules that effectively force businesses to use or build local infrastructure.

Meanwhile, Egnyte’s head of Europe, Ian McEwan, predicts a price war that could bring a nasty end for cloud companies that run generous free trials but attract plenty of non-paying users.

Infrastructure is code
In both models, the cloud still needs servers, but the need for dedicated storage or networking equipment is less certain.

Dell, for example, believes in data centers filled with homogeneous white-label boxes. It says that by 2020, as we approach the end of Moore’s Law, such boxes will triple the number of cores and the amount of available storage, and offer 16 times more RAM and 15 times more network bandwidth than the servers of today. Raw compute performance is expected to grow by a factor of 20.
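
Taken at face value, those multipliers are easy to project forward. Here is a quick sketch applying them to an assumed 2014 two-socket server (the baseline figures are our own illustration, not Dell’s):

    # Dell's 2020 multipliers applied to an assumed 2014 server.
    # Baseline values are illustrative assumptions, not Dell figures.
    baseline_2014 = {
        "cores":        24,    # physical cores
        "storage_tb":   10,    # TB of local storage
        "ram_gb":       256,   # GB of RAM
        "network_gbps": 10,    # Gbps of network bandwidth
        "compute":      1.0,   # normalized raw performance
    }
    multipliers = {
        "cores": 3, "storage_tb": 3,   # "triple"
        "ram_gb": 16,                  # "16 times more RAM"
        "network_gbps": 15,            # "15 times more bandwidth"
        "compute": 20,                 # "a factor of 20"
    }
    for key, base in baseline_2014.items():
        print(f"{key:>12}: {base:>6} -> {base * multipliers[key]}")

On those assumptions, a single 2U box in 2020 would carry 72 cores, 4TB of RAM and 150Gbps of network bandwidth.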

Servers of the future will also need a new type of non-volatile memory, occupying the position between flash and DRAM. Such memory will have to be 50x-1000x faster than current-generation flash, and the most promising candidates are Phase Change Memory (PRAM), likely to appear around 2016, and Resistive RAM (RRAM), expected in 2018.
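
Those multipliers pin down roughly where the new tier would sit. Assuming a NAND flash read takes on the order of 100 microseconds and a DRAM access around 100 nanoseconds (reference figures of ours, not from the vendors), the arithmetic looks like this:

    # Implied latency band for memory "50x-1000x faster than flash".
    # Reference latencies are assumed order-of-magnitude figures.
    FLASH_READ_US = 100.0     # assumed NAND flash read, microseconds
    DRAM_ACCESS_NS = 100.0    # assumed DRAM access, nanoseconds

    fastest_us = FLASH_READ_US / 1000   # the 1000x-faster end
    slowest_us = FLASH_READ_US / 50     # the 50x-faster end

    print(f"implied range: {fastest_us * 1000:.0f} ns to {slowest_us:.0f} us")
    print(f"DRAM for comparison: {DRAM_ACCESS_NS:.0f} ns")
    # -> 100 ns to 2 us: squarely between DRAM and today's flash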

On the software side of the stack, Docker-like application containers could gradually replace virtual machines. “There are two reasons for this: firstly, containers deliver continuous high performance by immediately auto-scaling servers,” explained Richard Davies, CEO of ElasticHosts. “Secondly, greater insight into how resources are used inside each container allows the cloud provider to bill users exactly for the amount used.”
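
Per-container billing of the sort Davies describes is possible because the Linux kernel accounts for each container’s resources directly. Below is a minimal sketch, assuming cgroup v1 paths as mounted by Docker’s cgroupfs driver; the billing rate is hypothetical, and real providers meter through their own agents and APIs:

    # Read a container's actual memory usage from cgroup v1 accounting
    # files and price it per GB-second. Paths assume Docker's cgroupfs
    # driver; the rate is a made-up illustration.
    from pathlib import Path

    CGROUP_ROOT = Path("/sys/fs/cgroup")
    PRICE_PER_GB_SECOND = 0.000001   # hypothetical billing rate

    def memory_bytes(container_id: str) -> int:
        """Current memory usage of one container under cgroup v1."""
        path = (CGROUP_ROOT / "memory" / "docker" / container_id /
                "memory.usage_in_bytes")
        return int(path.read_text())

    def charge(container_id: str, interval_s: float) -> float:
        """Bill only for the memory actually used over an interval."""
        gb_used = memory_bytes(container_id) / 2**30
        return gb_used * interval_s * PRICE_PER_GB_SECOND

Sampling real usage like this, rather than charging for a virtual machine’s fixed allocation, is what lets a provider bill ‘exactly for the amount used’.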

Erich Morisse, director of cloud at Red Hat, told DCD that some vendors will refocus their businesses around massive hardware manufacturing operations, offering low quality at a low price. Others will develop the software that makes use of that hardware, inevitably having to collaborate.

Will we see the demise of some blue chips that prove unable to adapt? Absolutely. Remember what happened to Digital Equipment and Novell? Eventually, all compute will run in the cloud. End users will own commodity-priced devices that serve one purpose – connecting with cloud services. That’s when cloud will truly mature.