The future of virtualization

Cloud computing and virtualization go hand in hand. Virtualization is the cloud's foundation, and cloud computing software such as OpenStack "is just the bit that sits on top of the house to manage this", as Rackspace's VP of technology and product, Nigel Beighton, will tell you.

But as the Cloud evolves, so too must virtualization to support more IO-intensive network and storage workloads, and to ensure that open standards being developed across the industry can also be applied to hypervisor designs. Beighton says most clouds today run on virtualization technology that is ten years old. But work is taking place behind the scenes to revolutionize the way virtualization is done.

The market leader
VMware has the largest market share when it comes to virtualization – 60%, according to research firm IDC. VMware global field CTO Paul Strong says that today most adoption is being driven by consolidation and the desire for private and hybrid cloud environments. But market share does not mean VMware can rest on its laurels. New demands, such as the software-defined data center (SDDC), are pushing virtualization technology forward.

“We have spent the last 15 years focused primarily on separating the application from the compute, or server, infrastructure. However, most applications are in fact network-distributed (think multi-tier apps). So, even though you can switch an instance (say a database server) on almost instantly, it may take hours, days or even weeks to re-program firewalls and load balancers, to assign new IP addresses, or to set up new VLANs for the complete application,” Strong says.

VMware purchased Nicira in July last year for its software-defined network (SDN) capabilities. “This (SDN) provides us with an opportunity to define a container, a virtual data center or virtual application for network-distributed applications,” Strong says.

He says this will eventually mean a container can be “manipulated” in much the same way that VMs are today, but for complete applications. And it is this that provides the foundation for the SDDC.
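
To make that concrete, here is a minimal sketch of the idea, not VMware's or Nicira's API: every class and field name below is hypothetical. The point is that the firewall rules, load balancer pool and network segments Strong mentions become data describing one "container", which an SDN controller could instantiate in software in minutes rather than through days of device reconfiguration.

```python
# Hypothetical sketch only: an application "container" described as data, so an
# SDN controller could set up its network in software instead of by hand.
# None of these names come from VMware or Nicira; they are illustrative.
from dataclasses import dataclass, field
from typing import List


@dataclass
class FirewallRule:
    src: str          # e.g. "web-tier"
    dst: str          # e.g. "db-tier"
    port: int


@dataclass
class VirtualAppContainer:
    """A multi-tier application plus the virtual network it needs."""
    name: str
    tiers: List[str] = field(default_factory=list)          # groups of VMs
    vlans: List[int] = field(default_factory=list)          # virtual segments
    firewall: List[FirewallRule] = field(default_factory=list)
    lb_pool: List[str] = field(default_factory=list)        # load-balanced members


# The whole application, network included, becomes one object a controller
# can provision quickly, rather than re-programming devices over days or weeks.
crm = VirtualAppContainer(
    name="crm",
    tiers=["web-tier", "app-tier", "db-tier"],
    vlans=[110, 120, 130],
    firewall=[FirewallRule("web-tier", "app-tier", 8080),
              FirewallRule("app-tier", "db-tier", 5432)],
    lb_pool=["web-01", "web-02"],
)
print(crm)
```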

“SDDC is about extending virtualization from servers to networking and storage, separating the application from the infrastructure and encapsulating it in a container. Once these applications are in these containers, we can automate the lifecycle of the containers and thus, by proxy, the applications within them. Large enterprises have thousands of applications. It is nigh on impossible to automate provisioning and management of each of these individually. But when you place each of these applications within its own container, it looks more or less the same from the perspective of the daily operations, such as provisioning, moving to scale up or down, moving for availability, moving to and from the Cloud.”
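
The scale argument can be illustrated with a similarly hypothetical sketch: once every application sits in the same kind of container, one small piece of automation can drive provisioning, scaling and moves for thousands of them, because the daily operations no longer depend on what is inside.

```python
# Illustrative only: a uniform lifecycle applied to any application container,
# regardless of what runs inside it. These names are not a real SDDC API.
from dataclasses import dataclass


@dataclass
class AppContainer:
    name: str
    scalable_tier: str = "web-tier"


class ContainerLifecycle:
    """Daily operations that look the same for every containerized application."""

    def provision(self, c: AppContainer) -> None:
        print(f"provision {c.name}")

    def scale(self, c: AppContainer, replicas: int) -> None:
        print(f"scale {c.name}/{c.scalable_tier} to {replicas} instances")

    def migrate(self, c: AppContainer, target: str) -> None:
        print(f"move {c.name} to {target}")


ops = ContainerLifecycle()
# A large enterprise might have thousands of these; the loop does not care
# whether a container holds a CRM system, a billing system or a batch pipeline.
for c in [AppContainer("crm"), AppContainer("billing"), AppContainer("analytics")]:
    ops.provision(c)
    ops.scale(c, replicas=4)
    ops.migrate(c, target="hybrid-cloud-zone-1")
```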

Strong says the SDDC will place new demands on hypervisors, which will have to handle IO-intensive network and storage virtual appliances as well as traditional applications. The hypervisor of tomorrow will need to be denser, able to exploit ever-growing numbers of processor cores and threads.

New hypervisor designs
Some new players entering the hypervisor market are basing their initial product offerings around cloud. ZeroVM is one. It claims to have the “first hypervisor specially designed” for the cloud model, with deployment speed, application isolation and efficiency built into its features. It claims a ZeroVM instance takes 5ms to create, making it possible to separate every single task into its own container, and it virtualizes only the parts of the server that are required to do the work. “Existing clouds are giant server farms that are spending precious resources virtualizing unneeded things,” ZeroVM’s website says.

The hypervisor uses Unix-style processes that communicate through pipes, in contrast to traditional hypervisors such as VMware, Xen and KVM, and claims to create a new VM for every single incoming request. It can also aggregate many physical servers and represent them as a single virtual system, or present a number of virtual systems backed by any number of physical servers. The end result is a lightweight hypervisor with a smaller footprint, allowing you, hypothetically, to divide a workload into 10,000 processes instead of 10 servers, bringing virtualization down to another level.
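
The pipe-based model ZeroVM describes is essentially the classic Unix one. The sketch below is plain Python rather than ZeroVM's actual interface, and the function names are invented for illustration: each incoming request gets its own short-lived, isolated process, and parent and child communicate only over a pipe.

```python
# Plain-Python sketch of the Unix model ZeroVM builds on, not its actual API:
# one short-lived, isolated process per incoming request, talking to its
# parent only through a pipe. Requires a Unix-like OS (os.fork).
import os


def handle_request(payload: bytes) -> bytes:
    """Work performed inside the per-request process."""
    return payload.upper()


def serve_one(payload: bytes) -> bytes:
    read_fd, write_fd = os.pipe()       # parent reads, the per-request child writes
    pid = os.fork()
    if pid == 0:                        # child: isolated worker for this request only
        os.close(read_fd)
        os.write(write_fd, handle_request(payload))
        os.close(write_fd)
        os._exit(0)
    os.close(write_fd)                  # parent: collect the result over the pipe
    result = os.read(read_fd, 65536)
    os.close(read_fd)
    os.waitpid(pid, 0)
    return result


if __name__ == "__main__":              # a throwaway process per request
    for request in [b"GET /index", b"GET /about"]:
        print(serve_one(request))
```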

Beighton says such developments are moving us toward a new age of virtualization. “It is the concept of moving away from just creating servers to deconstructing applications and virtualizing at the application and right down to the process and user level. It is at this point when you know cloud will become very different.”

Open cloud
Cloud has already changed the demands placed on virtualization, especially when it comes to open standards. OpenStack is just one example of the popularity of the open model. IBM CTO Matt Hogstrom, who is working on IBM’s new software-defined environment program looking at policy-based management of workloads (read more about this in the FOCUS 31 iPad edition at DCDFOCUS), says one of the biggest challenges VMware will face in future will be removing vendor lock-in to create more fluid delivery options through the common use of programming languages.

Strong says VMware is working with a number of open standards and community initiatives, from the Distributed Management Task Force (behind the Open Virtualization Format, which allows workloads to move across different vendors’ hypervisors) to OpenStack, which it contributes to through Nicira. These will be another influential factor in future generations of hypervisors designed especially for the Cloud.

 

Over the next two days, FOCUS will be covering the latest announcements from VMworld Europe in Barcelona. Sign up to our daily newsletter to hear the latest news. Or you can read more about virtualization in FOCUS magazine, edition 31. Available as a digital edition here.

 
