The hybrid cloud market is growing rapidly: by 2023, 40 percent of IT infrastructure and operations leaders are expected to implement a hybrid cloud architecture (Gartner IOCS, December 2020). What’s behind this growth?

Hybrid cloud environments are a fusion of private and public cloud services, with orchestration between both platforms. They enable enterprises to easily and cost-effectively respond to fluctuations in IT workloads by elastically expanding and shrinking the resources as needed.

Given the clear benefits of a hybrid cloud strategy, IT teams may be tempted to rush into implementing one. This article pinpoints three mistakes organizations commonly make and provides tips on how to avoid them, so you can ensure a smooth transition from your current infrastructure to a hybrid cloud environment.

1. Not monitoring cloud spending

Somewhat predictably, public cloud providers have not invested heavily in functionality for tracking resource usage and detecting wasteful resource allocations. Because consuming cloud resources is almost effortless, this can leave customers with an overwhelming resource sprawl problem.

With the self-service capabilities of the public cloud, within minutes a DevOps engineer can spin up dozens of test servers, launch hundreds of containers and consume terabytes of storage. This is not a concern if those resources are deleted one hour later, but if the test environment is forgotten amid other urgent tasks, your company is liable to carry on paying for years.

Confronted with hundreds of uncatalogued S3 buckets, snapshots and volumes, IT managers can find it strenuous to determine which resources are actually needed. For example, if servers start out oversized and there is no proper cloud resource monitoring in place, replacing them with smaller ones becomes a risky guess. So, rather ironically, cloud projects aimed at increasing elasticity and reducing costs by paying only for utilized capacity often turn into a nightmare of wasted cloud resources.

To avoid cloud sprawl, IT teams must track and monitor their cloud resources scrupulously from day one: define usage clearly in cloud policies, tag every resource, establish regular mandatory review cycles for all resources, and purchase the best monitoring tools the budget allows.
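As a minimal illustration of enforcing a tagging policy, the sketch below assumes AWS with the boto3 SDK and a hypothetical required “owner” tag; it simply lists running EC2 instances that are missing the tag so they can be reviewed.

```python
# Minimal sketch: flag running EC2 instances missing a required "owner" tag.
# Assumes AWS credentials are configured and boto3 is installed; the "owner"
# tag is a hypothetical example of a tagging policy.
import boto3

REQUIRED_TAG = "owner"

def find_untagged_instances(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    untagged = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(instance["InstanceId"])
    return untagged

if __name__ == "__main__":
    for instance_id in find_untagged_instances():
        print(f"Missing '{REQUIRED_TAG}' tag: {instance_id}")
```

A script like this can run on a schedule and feed the regular review cycle, rather than relying on engineers to remember what they spun up.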

2. Expecting existing applications to work well with network latency

The performance of cloud-based data centers and on-prem data centers is not the same. By moving to a hybrid model that places certain resources in the public cloud, you are adding latency into the mix. Many applications designed to work over a LAN will operate poorly if you relocate them to a cloud environment that’s accessible only by WAN.

Storage services are particularly prone to these complications. When you move storage to the public cloud but leave some of the storage clients on-prem, users may complain about sluggish performance even though high-bandwidth network links are in place.

Consider a simple script that deletes all the files in a folder containing 10,000 files. Over the LAN, with negligible latency, the script takes about 1 millisecond per file, totaling 10 seconds to delete all the files. But once you move the file server to a cloud data center located across the country, you add roughly 80 milliseconds of latency per transaction. Each deletion now takes 81 milliseconds, and the same process takes 13.5 minutes instead of 10 seconds (81 times slower).
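The arithmetic is easy to verify. The short sketch below multiplies out the per-file cost under both scenarios, assuming the deletions happen serially with one round trip per file.

```python
# Back-of-the-envelope check: serial per-file deletions over LAN vs. WAN.
FILES = 10_000
LAN_MS_PER_FILE = 1          # ~1 ms per delete on the local network
WAN_LATENCY_MS = 80          # added round-trip latency to the remote cloud region

lan_total_s = FILES * LAN_MS_PER_FILE / 1000
wan_total_s = FILES * (LAN_MS_PER_FILE + WAN_LATENCY_MS) / 1000

print(f"LAN: {lan_total_s:.0f} s")                   # 10 s
print(f"WAN: {wan_total_s / 60:.1f} min")            # 13.5 min
print(f"Slowdown: {wan_total_s / lan_total_s:.0f}x")  # 81x
```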

While this may sound counter-intuitive, it is a very real situation, and the speed of your expensive network link does not help because the bottleneck is round-trip latency, not throughput. This is why storage workloads tend to suffer, especially small-file and metadata-intensive jobs.

The most cost-effective way to avoid such storage latency issues with legacy applications is to use caching devices such as edge filers, which act as hybrid cloud storage enablers by keeping an intelligent on-prem cache of the data stored in the public cloud. Edge filers let workloads operate efficiently against cloud storage without requiring any modification.
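To illustrate the principle (not any particular vendor’s edge filer), here is a minimal read-through cache sketch: reads are served from a local on-prem directory when possible, and only cache misses pay the WAN round trip to object storage. The bucket name and cache path are hypothetical.

```python
# Minimal read-through cache sketch illustrating the edge-filer principle:
# serve reads from a local cache when possible; only misses pay the WAN round trip.
# Bucket name and cache directory are hypothetical placeholders.
import os
import boto3

BUCKET = "example-hybrid-bucket"
CACHE_DIR = "/var/cache/cloud-files"

s3 = boto3.client("s3")

def read_file(key: str) -> bytes:
    cached_path = os.path.join(CACHE_DIR, key.replace("/", "_"))
    if os.path.exists(cached_path):          # cache hit: no WAN latency
        with open(cached_path, "rb") as f:
            return f.read()
    # Cache miss: one round trip to the public cloud, then keep a local copy.
    response = s3.get_object(Bucket=BUCKET, Key=key)
    data = response["Body"].read()
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(cached_path, "wb") as f:
        f.write(data)
    return data
```

Real edge filers add cache eviction, write-back and consistency handling, but the performance argument is the same: repeated accesses stop paying the per-transaction latency penalty.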

3. Becoming locked into a single cloud provider

Companies deploying a hybrid cloud often unwittingly take steps that lock them into a single cloud provider, most often Amazon Web Services or Microsoft Azure. While it may be easier at first to put all your eggs in one basket, this can prove to be a very costly mistake over the long term.

You are at the cloud provider’s mercy if you decide to stick with a single provider. If something goes wrong or your provider decides to raise its prices, you will have no choice but to accept it. Moreover, cloud vendors are notorious for making it cheap or even free to move your data in, while charging exorbitant egress fees to get it back out if you decide to jump ship later.

There is no need to fall into this trap. You may not need two cloud providers from day one, but think ahead. One way to avoid vendor lock-in is to build on cloud-agnostic technologies. For compute, Kubernetes is becoming a de facto standard and offers good portability across clouds. For storage, look for solutions that are not tied to a single cloud vendor and that support cloud migration and multi-cloud from day one.
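As one small example of keeping storage code portable, the sketch below reads the object-storage endpoint and credentials from configuration instead of hard-coding a single provider, relying on the widely supported S3-compatible API. The environment variable names are hypothetical.

```python
# Sketch: talk to any S3-compatible object store by reading the endpoint from
# configuration instead of hard-coding one provider. Environment variable names
# are hypothetical examples.
import os
import boto3

def object_store_client():
    # endpoint_url=None falls back to AWS; other providers expose their own
    # S3-compatible endpoints that can be swapped in without code changes.
    return boto3.client(
        "s3",
        endpoint_url=os.environ.get("OBJECT_STORE_ENDPOINT"),
        aws_access_key_id=os.environ["OBJECT_STORE_ACCESS_KEY"],
        aws_secret_access_key=os.environ["OBJECT_STORE_SECRET_KEY"],
    )

if __name__ == "__main__":
    client = object_store_client()
    bucket = os.environ.get("OBJECT_STORE_BUCKET", "example-bucket")
    client.put_object(Bucket=bucket, Key="healthcheck.txt", Body=b"ok")
```

Keeping provider-specific details in configuration rather than code makes a later migration, or a second provider, far less painful.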

The transition from on-prem to hybrid cloud is not always simple, but the business benefits make the obstacles well worth overcoming. By monitoring cloud resource consumption, deploying caching devices to overcome latency and not relying on a single cloud vendor, your organization can steer clear of common pitfalls and make your hybrid cloud implementation a success.