

Hybrid cloud is about more than savings


If virtualization and private cloud promise cost reductions, hybrid cloud is about something else: infrastructure flexibility and reliability. Gartner predicts that almost half of large enterprises will have deployed hybrid clouds by the end of 2017. Their main motivation will not be cost, but agility.

 

Gartner analyst Thomas Bittman says: “Taking the next step of adding usage metrics, self-service offerings and automated provisioning requires investment in technologies without a significant reduction in operational cost. With this in mind, the driving factor for going that next step should primarily be agility.”

 

IT workers have to understand how agility can benefit current services and drive the creation of new services, Bittman says. And agility means different things to different users.

 

Eric Roch, principal of cloud and integration at Perficient, an IT consulting firm, says capacity bursting and disaster recovery are very popular use cases for hybrid cloud today. Another big one is DevOps (development and operations).

 

Hybrid infrastructure enables the development and operations teams to work together and quickly release and iterate through the software development process. “During that whole development process, it’s much quicker to deploy development in the Cloud,” Roch says.

 

Business intelligence is another good application for hybrid cloud. “We’ve done things like have large volumes of data for business intelligence in the Cloud and integrate that back with on-premise data … to get the total solution,” Roch says. This provides a more comprehensive answer to a business-intelligence question.

 

Many of these deployments are undertaken because of compliance concerns. Healthcare companies, for example, may have patient data stored on premise (for privacy compliance), while overall patient-population data resides in a public Cloud. Running analytics across these two types of data is a fairly common choice for healthcare organizations, Roch says.
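The pattern Roch describes can be sketched in a few lines. This is an illustrative example, not his clients' actual architecture: identifiable patient records stay on-premise, while anonymized population statistics live in the public cloud, and the analytics step combines the two without moving patient identifiers off premise. All names and fields here are invented.

```python
# Hypothetical sketch: join on-premise patient records with
# cloud-hosted, anonymized population statistics.
from dataclasses import dataclass

@dataclass
class PatientRecord:          # stays on-premise (privacy compliance)
    patient_id: str
    condition: str

@dataclass
class PopulationStat:         # anonymized, stored in the public cloud
    condition: str
    national_recovery_rate: float

def enrich(local: list[PatientRecord],
           cloud: list[PopulationStat]) -> list[dict]:
    """Combine both data sets; patient IDs never leave the local side."""
    rates = {s.condition: s.national_recovery_rate for s in cloud}
    return [{"patient_id": p.patient_id,
             "condition": p.condition,
             "recovery_rate": rates.get(p.condition)}
            for p in local]

local = [PatientRecord("p-001", "asthma")]
cloud = [PopulationStat("asthma", 0.82)]
print(enrich(local, cloud))
```

The design point is that only the non-identifying join key (here, the condition) crosses the boundary between the two environments.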

 

Hybrid infrastructure is also popular in retail, where a company will integrate data about customer sentiment toward a certain product from social-media sites with in-house data to make decisions about that product. It is quite rare for customers to set up these types of analytics solutions on their own. Today, this is done mostly through service offerings. “It’s somewhat leading-edge for companies to do it themselves,” Roch says.

 

Technologies like Hadoop and cloud services are taking businesses away from traditional data warehousing and analytics vendors like Teradata. Roch says he has seen customers turn down traditional warehousing appliances in favor of service solutions combining Hadoop and cloud storage.

 

Some applications, of course, are a natural fit for hybrid cloud. “There are certain things that are easy to move to the cloud,” Roch says. Customer relationship management (CRM) is one of these. Here, a cloud service like Salesforce.com is integrated with in-house data.

 

The hybrid advantage

So why do companies choose hybrid infrastructure over 100% cloud? The main reasons are security and compliance, Roch says. A hybrid infrastructure offers the best of both worlds: the security of on-premise data storage and the flexibility of public cloud.

 

This attractiveness is not lost on the IT vendors, all of whom are building platforms that are easier to set up as elements of hybrid infrastructure. The likes of IBM and HP are building capabilities like web server, load-balancer and database templates into their server products, for example.

 

Storage vendors are doing the same thing, and everybody is talking about ease of deployment in the Cloud. “That’s where things are going,” Roch says.

 

What makes it difficult now is lack of interoperability. Since this is all new, there are competing standards out there. There is VMware, there is Amazon Web Services, there is OpenStack. Right now, you cannot burst onto one platform one day and switch to another one the next. “APIs don’t match up,” Roch says. “That’s a barrier to making things work. Interoperability between cloud providers is not yet mature.”
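The mismatched-API problem Roch points to is what abstraction layers try to paper over. A minimal sketch of that idea follows; the class and method names are invented for illustration, and the real provider calls (EC2 RunInstances, the Nova compute API) are only indicated in comments, not implemented.

```python
# Hypothetical provider-adapter sketch: hide each cloud's API behind
# a common interface so workloads can burst onto either platform.
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    @abstractmethod
    def launch_instance(self, size: str) -> str: ...

class AWSAdapter(CloudProvider):
    def launch_instance(self, size: str) -> str:
        # a real adapter would call the EC2 RunInstances API here
        return f"aws-instance-{size}"

class OpenStackAdapter(CloudProvider):
    def launch_instance(self, size: str) -> str:
        # a real adapter would call the Nova compute API here
        return f"openstack-server-{size}"

def burst(provider: CloudProvider, size: str = "small") -> str:
    """Burst onto whichever provider is plugged in today."""
    return provider.launch_instance(size)

print(burst(AWSAdapter()))        # swapping providers changes one argument
print(burst(OpenStackAdapter()))
```

In practice, as the article notes, the semantics behind each call differ enough that such adapters remain leaky, which is exactly the interoperability gap Roch describes.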

 

This immaturity manifests itself by forcing end users to lock themselves into a single provider. Ideally, you want to avoid being locked into one vendor, “but right now, it’s just not the case,” he says.

 

Providers that have colocated or dedicated servers and cloud infrastructure offerings win in this scenario, while the customers lose the freedom to mix and match providers.

 

New set of skills

Even when APIs do match, setting up a cloud infrastructure and operating it typically involves a few changes to the IT organization, especially when it comes to administering workloads.

 

With cloud, there is no more request filing for servers and months-long deployment turn-around times. There is also no manual management of day-to-day IT maintenance tasks. The IT admin’s job is no longer to feed CDs into servers or download files and load them onto the hardware. “It’s really managing a larger workload,” Roch says, “and deciding how a workload is going to be distributed.” These jobs require skills around relatively new technologies such as the open-source cloud architecture OpenStack or VMware’s vCloud.

 

A mature understanding of overall cloud-infrastructure concepts is also important. Understanding the geographical location of the data centers hosting your cloud servers, for example, can be crucial. “You need to understand the topology,” Roch says. “It’s important, but I don’t know that enough companies pay attention to it.”

 

Specifically, it is important to be able to recognize potential single points of failure in cloud infrastructure. If your application is deployed in multiple AWS regions but storage for that application is in one location, for example, then that geography-based resiliency is not really there.

Another big consideration affected by data center location is latency. “Looking at distribution, caching, single points of failure and availability – these should be parts of that equation,” Roch says.
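The topology check Roch recommends can be reduced to a simple rule of thumb. The sketch below is an invented illustration (the region names and the function are not from the article): flag any deployment where a multi-region application depends on storage confined to a single geography.

```python
# Hypothetical sketch of a single-point-of-failure topology check.
def storage_is_single_point_of_failure(app_regions: set[str],
                                       storage_regions: set[str]) -> bool:
    """True when a multi-region app depends on single-region storage."""
    return len(app_regions) > 1 and len(storage_regions) == 1

# App in two regions, storage in one: the resiliency is illusory.
print(storage_is_single_point_of_failure(
    {"us-east-1", "us-west-2"}, {"us-east-1"}))

# Storage replicated alongside the app: no single geographic failure point.
print(storage_is_single_point_of_failure(
    {"us-east-1", "us-west-2"}, {"us-east-1", "us-west-2"}))
```

A real audit would also weigh the latency, caching and availability factors Roch lists, but the geographic check above is the part companies most often overlook.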

 

Hybrid as differentiation

For Farelogix, a company that provides technology services to airlines, a hybrid cloud was a way to make the company stand out among the competition.

 

The company provides airlines with technologies to support new ways of selling and packaging products. Much of its business is back-end components of airline websites that allow travel sites like Travelocity to hook into airline ticketing systems. Put simply, it is a technology shop that provides custom solutions for customers.

 

The space Farelogix operates in is quickly getting populated by competitors, Sue Carter, SVP of marketing, says. These competitors include the likes of Google. Carter says this means technology is of critical importance to the company’s future success.

 

Farelogix chose Verizon Terremark to provide its hybrid cloud infrastructure. Nadesha Ranasinghe, director of IT services at Farelogix, says its primary application stack is running in Verizon’s multi-tenant enterprise cloud, while its database environment is running on physical equipment on a Verizon data center floor.

 

Farelogix has about nine racks hosted at Verizon’s Miami NAP of the Americas data center and its cloud disaster-recovery site is in the provider’s Virginia facility. The infrastructure is set up in an active-active mode, which means all transactions run at both facilities. Farelogix has a pool of reserved cloud capacity with the option to burst.

 

For Ranasinghe, the ability to stay with the same provider for both dedicated and cloud infrastructure is a positive thing. Because the provider is the same, time-to-market for the new hybrid infrastructure was short. Since the company had been Verizon’s colocation customer in Miami, integration with the same provider’s cloud hosted at the other data center was relatively quick and painless.

 

Another benefit of using Verizon only is the direct link between its data centers. Had Farelogix chosen two different vendors, traffic between the cloud and on-premise environments would have had to travel over the Internet, with intolerable latency, Ranasinghe explains. The connectivity would also have cost more had the company gone to a third-party carrier for it.

 

Hybrid to the rescue

For Schlitterbahn, a New Braunfels, Texas-based operator of water parks, the decision to set up a hybrid infrastructure for the purpose of bursting into the cloud came as a result of a crisis.

 

In the summer of 2011, over the July 4th weekend (one of the company’s busiest times), its website crashed because it could not handle the volume of traffic coming through.

 

Pat Symchych, Schlitterbahn’s web developer, says that for the company, whose demand is seasonal, the ability to burst into cloud when it needs to has resulted in more resilient services and savings. When traffic drops, she spins down all the unnecessary cloud VMs and she doesn’t pay for them when they are not in use.

 

“We do save money too in the overall picture because I kick servers back off when my traffic drops,” she says.
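Symchych's seasonal scaling decision can be sketched as a simple capacity calculation. Everything below is invented for illustration (the thresholds, the per-VM capacity, and the baseline of two dedicated servers mentioned later in the article): burst VMs are added only above what the dedicated environment can serve, and the count drops to zero in the off-season.

```python
# Hedged sketch of seasonal burst-capacity sizing; numbers are invented.
BASELINE_SERVERS = 2          # the dedicated environment always runs

def burst_vms_needed(requests_per_min: int,
                     capacity_per_server: int = 1000) -> int:
    """VMs to rent beyond the dedicated baseline; zero off-season."""
    needed = -(-requests_per_min // capacity_per_server)  # ceiling division
    return max(0, needed - BASELINE_SERVERS)

print(burst_vms_needed(5500))   # peak holiday-weekend traffic
print(burst_vms_needed(800))    # off-season: spin everything down
```

The savings come from the second case: when traffic fits on the baseline servers, no cloud VMs are running and none are billed.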

 

Prior to the 2011 crash, the company’s website was hosted by a provider in a shared environment, Symchych says. Schlitterbahn had gone through a firm that built a Drupal website for them and they handled the hosting as well. When the demand spike caused the site to crash, Symchych put up a temporary website and hired a company to migrate the site to a hybrid cloud infrastructure by Rackspace.

 

In October, when we spoke with Symchych, Schlitterbahn had just closed its water parks because the season had ended. Its dedicated environment at a Rackspace data center was running on two servers. Next summer, when the season opens again, the infrastructure will be temporarily scaled up once more.

 

This article first appeared in the 32nd edition of DatacenterDynamics FOCUS magazine. Follow the link for a free subscription.

Related images

  • Schlitterbahn, one of the largest water park operators in the US, uses hybrid cloud services by Rackspace, hosted in a data center like this

