Cloud computing and virtualization go hand in hand. Virtualization is cloud’s foundation, and cloud computing software – OpenStack, for example – “is just the bit that sits on top of the house to manage this”, as Rackspace’s VP of technology and product, Nigel Beighton, will tell you.

But as the Cloud evolves, so too must virtualization to support more IO-intensive network and storage workloads, and to ensure that open standards being developed across the industry can also be applied to hypervisor designs. Beighton says most clouds today run on virtualization technology that is ten years old. But work is taking place behind the scenes to revolutionize the way virtualization is done.

The market leader
VMware has the largest market share when it comes to virtualization – 60%, according to research firm IDC. VMware global field CTO Paul Strong says that today most adoption is being driven by consolidation and the desire for private and hybrid cloud environments. But market share does not mean VMware can rest on its laurels. New demands, such as the software-defined data center (SDDC), are pushing virtualization technology forward.

“We have spent the last 15 years focused primarily on separating the application from the compute, or server, infrastructure. However, most applications are in fact network-distributed (think multi-tier apps). So, even though you can switch an instance (say a database server) on almost instantly, it may take hours, days or even weeks to re-program firewalls and load balancers, to assign new IP addresses, or to set up new VLANs for the complete application,” Strong says.

VMware purchased Nicira in July last year for its software-defined network (SDN) capabilities. “This (SDN) provides us with an opportunity to define a container, a virtual data center or virtual application for network-distributed applications,” Strong says.

He says this will eventually mean a container can be “manipulated” in much the same way that VMs are today, but for complete applications. And it is this that provides the foundation for the SDDC.

“SDDC is about extending virtualization from servers to networking and storage, separating the application from the infrastructure and encapsulating it in a container. Once these applications are in these containers, we can automate the lifecycle of the containers and thus, by proxy, the applications within them. Large enterprises have thousands of applications. It is nigh on impossible to automate provisioning and management of each of these individually. But when you place each of these applications within its own container, it looks more or less the same from the perspective of the daily operations, such as provisioning, moving to scale up or down, moving for availability, moving to and from the Cloud.”
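To make that idea concrete, the sketch below shows how uniform application containers let a single automation code path handle provisioning, scaling and movement for any number of otherwise very different applications. All names here (AppContainer, Fabric and so on) are hypothetical illustrations, not a VMware API.

```python
# Hypothetical sketch: once every application is wrapped in a uniform
# "container" description, day-to-day operations (provision, scale, move)
# can be automated with one code path instead of one per application.
from dataclasses import dataclass

@dataclass
class AppContainer:
    name: str
    vms: int            # compute instances the application needs
    networks: list      # virtual networks / firewall rules it carries with it
    storage_gb: int

class Fabric:
    """Stand-in for a software-defined data center control plane."""
    def provision(self, app: AppContainer):
        print(f"provisioning {app.name}: {app.vms} VMs, {app.networks}, {app.storage_gb} GB")

    def scale(self, app: AppContainer, vms: int):
        app.vms = vms
        print(f"scaling {app.name} to {vms} VMs")

    def migrate(self, app: AppContainer, target: str):
        print(f"moving {app.name} (and its networks) to {target}")

if __name__ == "__main__":
    fabric = Fabric()
    # Thousands of very different apps all look the same to the automation.
    for app in [AppContainer("crm", 4, ["web-tier", "db-tier"], 500),
                AppContainer("billing", 2, ["internal"], 200)]:
        fabric.provision(app)
        fabric.scale(app, app.vms + 2)
        fabric.migrate(app, "public-cloud")
```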

Strong says the SDDC will place new demands on hypervisors, which will have to handle IO-intensive network and storage virtual appliances as well as traditional applications. The hypervisor of tomorrow will need to be denser in order to handle increasing numbers of processor cores and threads.

New hypervisor designs
Some new players entering the hypervisor market are basing their initial product offerings around cloud. ZeroVM is one. It claims to have the “first hypervisor specially designed” for the model, with deployment speed, application isolation and efficiency built into its features. It claims a ZeroVM takes 5ms to create, making it possible to separate every single task into its own container. It only virtualizes parts of the server that are required to do the work. “Existing clouds are giant server farms that are spending precious resources virtualizing unneeded things,” ZeroVM’s website says.

The hypervisor uses Unix-style processes that communicate through pipes – unlike VMware, Xen and KVM – and claims to create a new VM for every single incoming request. It can also aggregate many physical servers and represent them as a single virtual system, or represent a number of virtual systems backed by any number of physical servers. The end result for ZeroVM is a lightweight hypervisor with a smaller footprint, allowing you, hypothetically, to divide a workload into 10,000 processes instead of 10 servers, bringing virtualization down to another level.
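The process-per-request idea can be illustrated with ordinary operating-system primitives. The sketch below is only a conceptual stand-in – it uses Python’s multiprocessing module rather than ZeroVM’s actual tooling – to show a server spawning a short-lived, isolated worker for each request and reading its answer back over a pipe.

```python
# Conceptual sketch only: one isolated worker process per request,
# communicating over a pipe, in the spirit of ZeroVM's
# lightweight-VM-per-task model. This is NOT the ZeroVM API.
import multiprocessing as mp

def handle_request(conn, payload):
    # The worker sees only what is handed to it -- a crude stand-in for
    # "virtualize only the parts of the server required to do the work".
    result = payload.upper()          # placeholder for real work
    conn.send(result)
    conn.close()

def serve(requests):
    for payload in requests:
        parent_conn, child_conn = mp.Pipe()
        # One short-lived, isolated worker per incoming request.
        worker = mp.Process(target=handle_request, args=(child_conn, payload))
        worker.start()
        print(parent_conn.recv())     # read the worker's answer off the pipe
        worker.join()

if __name__ == "__main__":
    serve(["first request", "second request"])
```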

Beighton says such developments are moving us toward a new age of virtualization. “It is the concept of moving away from just creating servers to deconstructing applications and virtualizing at the application level, right down to the process and user level. It is at this point when you know cloud will become very different.”

Open cloud
Cloud has already changed the demands placed on virtualization, especially when it comes to open standards. OpenStack is just one example of the popularity of the open model. IBM CTO Matt Hogstrom, who is working on IBM’s new software-defined environment program, which looks at policy-based management of workloads (read more about this in the FOCUS 31 iPad edition at DCDFOCUS), says one of the biggest challenges VMware will face in the future will be removing vendor lock-in to create more fluid delivery options through the common use of programming languages.

Strong says VMware is working with a number of open standards and community initiatives, from the Distributed Management Task Force (behind the Open Virtualization Format, which allows movement across different vendors’ hypervisors) to OpenStack, which it contributes to through Nicira. These will be another influence on future generations of hypervisors designed especially for the Cloud.

 
