If virtualization and private cloud promise cost reductions, hybrid cloud is about something else: infrastructure flexibility and reliability. Gartner predicts that almost half of large enterprises will have deployed hybrid clouds by the end of 2017. Their main motivation will not be cost, but agility.

Gartner analyst Thomas Bittman says: “Taking the next step of adding usage metrics, self-service offerings and automated provisioning requires investment in technologies without a significant reduction in operational cost. With this in mind, the driving factor for going that next step should primarily be agility.”

IT workers have to understand how agility can benefit current services and drive the creation of new services, Bittman says. And agility means different things to different users.

Eric Roch, principal of cloud and integration at Perficient, an IT consulting firm, says capacity bursting and disaster recovery are very popular use cases for hybrid cloud today. Another big one is DevOps (development and operations).

Hybrid infrastructure enables the development and operations teams to work together and quickly release and iterate through the software development process. “During that whole development process, it’s much quicker to deploy development in the Cloud,” Roch says.

Business intelligence is another good application for hybrid cloud. “We’ve done things like have large volumes of data for business intelligence in the Cloud and integrate that back with on-premise data … to get the total solution,” Roch says. This provides a more comprehensive answer to a business-intelligence question.
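
As a minimal illustration of this pattern, cloud-side aggregates can be pulled down and joined back to on-premise records on a shared key. All data, field names and figures below are hypothetical, not from the article:

```python
# Minimal sketch of a hybrid BI join: aggregates computed in the cloud
# are combined with on-premise records by a shared key.
# All data and field names here are hypothetical illustrations.

# Aggregates computed in the cloud (e.g. over large raw datasets)
cloud_aggregates = {
    "SKU-1": {"social_mentions": 1200, "sentiment": 0.7},
    "SKU-2": {"social_mentions": 300, "sentiment": -0.2},
}

# Records held on premise (e.g. for compliance reasons)
on_prem_sales = [
    {"sku": "SKU-1", "units_sold": 540},
    {"sku": "SKU-2", "units_sold": 80},
]

def total_solution(cloud, on_prem):
    """Join on-premise rows with cloud aggregates on the shared key."""
    combined = []
    for row in on_prem:
        enriched = dict(row)
        enriched.update(cloud.get(row["sku"], {}))
        combined.append(enriched)
    return combined

report = total_solution(cloud_aggregates, on_prem_sales)
```

The join itself is ordinary; the hybrid part is only that each side of it lives where compliance or economics dictate.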

Many of these deployments are undertaken because of compliance concerns. Healthcare companies, for example, may have patient data stored on premise (for privacy compliance), while overall patient-population data resides in a public Cloud. Running analytics across these two types of data is a fairly common choice for healthcare organizations, Roch says.

Hybrid infrastructure is also popular in retail, where a company will integrate data about customer sentiment toward a certain product from social-media sites with in-house data to make decisions relative to that product. It is quite rare for customers to set up these types of analytics solutions on their own. Today, this is done mostly through service offerings. “It’s somewhat leading-edge for companies to do it themselves,” Roch says.

Technologies like Hadoop and cloud services are taking businesses away from traditional data warehousing and analytics vendors like Teradata. Roch says he has seen customers turn down traditional warehousing appliances in favor of service solutions combining Hadoop and cloud storage.

Some applications, of course, are a natural fit for hybrid cloud. “There are certain things that are easy to move to the cloud,” Roch says. Customer relationship management (CRM) is one of these. Here, a cloud service like Salesforce.com is integrated with in-house data.

The hybrid advantage

So why do companies choose hybrid infrastructure over 100% cloud? The main reasons are security and compliance, Roch says. A hybrid infrastructure offers the best of both worlds: the security of on-premise data storage and the flexibility of public cloud.

This attractiveness is not lost on the IT vendors, all of whom are building platforms that are easier to set up as elements of hybrid infrastructure. The likes of IBM and HP, for example, are building web-server, load-balancer and database templates into their server products.

Storage vendors are doing the same thing, and everybody is talking about ease of deployment in the Cloud. “That’s where things are going,” Roch says.

What makes it difficult now is lack of interoperability. Since this is all new, there are competing standards out there. There is VMware, there is Amazon Web Services, there is OpenStack. Right now, you cannot burst onto one platform one day and switch to another one the next. “APIs don’t match up,” Roch says. “That’s a barrier to making things work. Interoperability between cloud providers is not yet mature.”
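
The mismatch Roch describes can be sketched as an adapter layer: each provider exposes a differently shaped API, so portable code needs a translation shim per provider. The classes and method names below are illustrative stand-ins, not any vendor's real SDK:

```python
# Sketch of why mismatched provider APIs hinder bursting between clouds.
# The provider classes below are hypothetical stand-ins, not real SDKs.

class ProviderA:                       # one cloud's native API shape
    def run_instances(self, image_id, count):
        return [f"a-{image_id}-{i}" for i in range(count)]

class ProviderB:                       # another cloud's incompatible shape
    def create_server(self, image, flavor):
        return f"b-{image}-{flavor}"

class ComputeAdapter:
    """Uniform interface hiding each provider's incompatible calls."""
    def __init__(self, backend):
        self.backend = backend

    def launch(self, image):
        if isinstance(self.backend, ProviderA):
            return self.backend.run_instances(image, 1)[0]
        if isinstance(self.backend, ProviderB):
            return self.backend.create_server(image, flavor="small")
        raise NotImplementedError("no adapter for this provider")

# The same calling code can now burst onto either platform:
ids = [ComputeAdapter(p).launch("web-img") for p in (ProviderA(), ProviderB())]
```

Until cross-provider standards mature, every such shim is code the customer has to write and maintain, which is exactly the lock-in pressure Roch describes.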

This immaturity manifests itself in vendor lock-in, forcing end users to commit to a single provider. Ideally, you want to avoid being locked into one vendor, “but right now, it’s just not the case,” he says.

Providers that have colocated or dedicated servers and cloud infrastructure offerings win in this scenario, while the customers lose the freedom to mix and match providers.

New set of skills

Even when APIs do match, setting up and operating a cloud infrastructure typically involves a few changes to the IT organization, especially when it comes to administering workloads.

With cloud, there is no more filing requests for servers and waiting through months-long deployment turnarounds. There is also no manual management of day-to-day IT maintenance tasks. The IT admin’s job is no longer to feed CDs into servers or download files and load them onto the hardware. “It’s really managing a larger workload,” Roch says, “and deciding how a workload is going to be distributed.” These jobs require skills around relatively new technologies such as the open-source cloud architecture OpenStack or VMware’s vCloud.

A mature understanding of overall cloud-infrastructure concepts is also important. Understanding the geographical location of the data centers hosting your cloud servers, for example, can be crucial. “You need to understand the topology,” Roch says. “It’s important, but I don’t know that enough companies pay attention to it.”

Specifically, it is important to be able to recognize potential single points of failure in cloud infrastructure. If your application is deployed in multiple AWS regions, for example, but its storage sits in a single location, then that geography-based resiliency is not really there.

Another big consideration affected by data center location is latency. “Looking at distribution, caching, single points of failure and availability – these should be parts of that equation,” Roch says.
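
Roch's single-point-of-failure warning can be made concrete with a back-of-the-envelope availability calculation. The 99% figures below are illustrative assumptions, not measured numbers:

```python
# Back-of-the-envelope availability sketch: redundant compute regions
# don't help much if storage sits in a single location.
# The 0.99 availability figures are illustrative assumptions.

def parallel(*avail):
    """Availability of redundant components: fails only if all fail."""
    p_all_fail = 1.0
    for a in avail:
        p_all_fail *= (1.0 - a)
    return 1.0 - p_all_fail

def series(*avail):
    """Availability of a chain: works only if every component works."""
    p = 1.0
    for a in avail:
        p *= a
    return p

region = 0.99                       # each compute region on its own
compute = parallel(region, region)  # two redundant regions -> ~0.9999
storage = 0.99                      # single storage location

# The single-location storage caps the whole system near 0.99 again:
system = series(compute, storage)
```

Redundant regions push compute to roughly four nines, but chaining in the single storage location drags the end-to-end figure back below the availability of either region alone.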

Hybrid as differentiation

For Farelogix, a company that provides technology services to airlines, a hybrid cloud was a way to make the company stand out among the competition.

The company provides airlines with technologies to support new ways of selling and packaging products. Much of its business is back-end components of airline websites that allow travel sites like Travelocity to hook into airline ticketing systems. Put simply, it is a technology shop that provides custom solutions for customers.

The space Farelogix operates in is quickly filling with competitors, including the likes of Google, says Sue Carter, SVP of marketing. Carter says this means technology is of critical importance to the company’s future success.

Farelogix chose Verizon Terremark to provide its hybrid cloud infrastructure. Nadesha Ranasinghe, director of IT services at Farelogix, says its primary application stack is running in Verizon’s multi-tenant enterprise cloud, while its database environment is running on physical equipment on a Verizon data center floor.

Farelogix has about nine racks hosted at Verizon’s Miami NAP of the Americas data center and its cloud disaster-recovery site is in the provider’s Virginia facility. The infrastructure is set up in an active-active mode, which means all transactions run at both facilities. Farelogix has a pool of reserved cloud capacity with the option to burst.

For Ranasinghe, the ability to stay with the same provider for both dedicated and cloud infrastructure is a positive thing. Because the provider is the same, time-to-market for the new hybrid infrastructure was short. Since the company had been Verizon’s colocation customer in Miami, integration with the same provider’s cloud hosted at the other data center was relatively quick and painless.

Another benefit of using only Verizon is the direct link between its data centers. Had Farelogix chosen two different vendors, traffic between the cloud and on-premise environments would have had to travel over the Internet, with intolerable latency, Ranasinghe explains. The connectivity would also have cost more had the company gone to a third-party carrier for it.

Hybrid to the rescue

For Schlitterbahn, a New Braunfels, Texas-based operator of water parks, the decision to set up a hybrid infrastructure for the purpose of bursting into the cloud came as a result of a crisis.

In the summer of 2011, on the Fourth of July weekend (one of the company’s busiest times), its website crashed because it could not handle the volume of traffic coming through.

Pat Symchych, Schlitterbahn’s web developer, says that for the company, whose demand is seasonal, the ability to burst into cloud when it needs to has resulted in more resilient services and savings. When traffic drops, she spins down all the unnecessary cloud VMs and she doesn’t pay for them when they are not in use.

“We do save money too in the overall picture because I kick servers back off when my traffic drops,” she says.
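
The savings Symchych describes come from paying hourly only while burst capacity is actually running. A rough sketch of the arithmetic, using entirely hypothetical prices, VM counts and season lengths (none of these figures are from the article):

```python
# Rough cost sketch: bursting seasonally vs. provisioning for peak
# capacity year-round. All prices and hours are hypothetical.

HOURS_PER_YEAR = 24 * 365
PEAK_HOURS = 24 * 120      # assume ~120 busy summer days
RATE = 0.10                # assumed $/hour per cloud VM
BURST_VMS = 8              # extra VMs needed only at peak

# Keeping peak capacity running all year:
always_on = BURST_VMS * RATE * HOURS_PER_YEAR

# Spinning burst VMs down off-season, as Schlitterbahn does:
burst_only = BURST_VMS * RATE * PEAK_HOURS

savings = always_on - burst_only
```

Under these assumed numbers, the burst model costs roughly a third of the always-on model; the exact ratio simply tracks how small the peak season is relative to the whole year.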

Prior to the 2011 crash, the company’s website was hosted by a provider in a shared environment, Symchych says. Schlitterbahn had gone through a firm that built a Drupal website for them and they handled the hosting as well. When the demand spike caused the site to crash, Symchych put up a temporary website and hired a company to migrate the site to a hybrid cloud infrastructure by Rackspace.

In October, when we spoke with Symchych, Schlitterbahn had just closed its water parks because the season had ended. Its dedicated environment at a Rackspace data center was running on two servers. Next summer, when the season opens again, the infrastructure will be temporarily scaled up once more.

This article first appeared in the 32nd edition of DatacenterDynamics FOCUS magazine.