Despite the rapid adoption of cloud technologies, there’s still a long-standing deficit in the ability to reliably distribute workloads across multiple clouds, multiple data centers and hybrid infrastructures. The result is poorly distributed workloads and degraded application performance that could be avoided if workloads were better managed at a global level. What’s needed is better global server load balancing, or GSLB.

Balancing the load in the cloud

Because the need to intelligently distribute workloads is fundamental, load balancers, also referred to as application delivery controllers (ADCs), are widely deployed in data centers. Their function is to distribute workloads across backend servers, ensuring optimum use of aggregate server capacity and better application performance.
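
As a concrete illustration, here is a minimal sketch, in Python, of the decision an ADC repeats for every incoming request: pick a backend so that work stays evenly spread across the pool. The backend names and the least-connections policy shown here are illustrative assumptions; real ADCs wrap this kind of logic in optimized proxies with health checks and connection management.

```python
# Minimal sketch of a load balancer's core decision: choose a backend for
# each request so aggregate server capacity is used evenly.
# Backend names are hypothetical.
from dataclasses import dataclass


@dataclass
class Backend:
    name: str
    active_connections: int = 0


class LeastConnectionsBalancer:
    """Send each new request to the backend with the fewest active connections."""

    def __init__(self, backends):
        self.backends = backends

    def pick(self) -> Backend:
        target = min(self.backends, key=lambda b: b.active_connections)
        target.active_connections += 1  # decremented when the request completes
        return target


pool = [Backend("app-01"), Backend("app-02"), Backend("app-03")]
lb = LeastConnectionsBalancer(pool)
print([lb.pick().name for _ in range(5)])  # requests spread across the pool
```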

The traditional load balancer market features providers like Citrix, Radware, F5 and Kemp Technologies. Their hardware ADCs have been the go-to solutions for infrastructure and operations teams for quite some time. Recently, software-based ADCs from these vendors and software-only solutions such as HAProxy, Nginx and Amazon ELB have emerged as enterprises have moved applications to the cloud.

There are two main paths organizations can take to arrive at multi-data center, multi-cloud GSLB. One is to use a traditional managed DNS provider for basic traffic management. The advantage of this approach is that it is easy to implement, low-cost and reliable, and requires no capital outlay. Unfortunately, it offers only minimal traffic management capabilities such as round-robin DNS and geo-routing. It cannot prevent maldistribution of workloads because it relies on fixed, static rules rather than routing traffic based on the actual, real-time workloads and capacity at each data center. For example, geo-routing can only ensure that users (or at least their workloads) are sent to the geographically closest data center. It cannot account for uneven geographic distribution of users, localized demand spikes or server outages within a data center.
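
A short, hypothetical sketch makes the limitation clear: static geo-routing is just a fixed lookup table, so no information about current load, capacity or outages ever reaches the routing decision. The region-to-data-center map and hostnames below are made up for illustration.

```python
# Static geo-routing reduced to its essence: a fixed lookup table.
# Regions, sites and hostnames are hypothetical.
GEO_MAP = {
    "eu": "fra1.example.com",  # European users -> Frankfurt
    "us": "iad1.example.com",  # US users -> Virginia
    "ap": "sin1.example.com",  # APAC users -> Singapore
}


def resolve(region: str) -> str:
    """Return the data center for a user's region using only a static rule."""
    # Nothing about real-time load or health feeds into this decision, so a
    # localized demand spike simply piles onto the "closest" site.
    return GEO_MAP.get(region, "iad1.example.com")


print(resolve("eu"))  # always fra1, even if Frankfurt is saturated
```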

To address these limitations, many ADC vendors offer their own purpose-built DNS appliances that integrate tightly with their load balancers. By receiving real-time load and capacity information from the local load balancers, these appliances can make traffic management decisions based on actual usage levels at each data center.
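
As a hedged sketch of what that telemetry-driven decision could look like, the snippet below routes new traffic to the site with the most relative headroom. The sites, numbers and data layout are assumptions for illustration, not any vendor's actual API.

```python
# Hypothetical load reports from the local load balancers at each data center:
# site -> (current requests/sec, capacity in requests/sec)
telemetry = {
    "fra1": (9200, 10000),
    "iad1": (4100, 10000),
    "sin1": (2500, 6000),
}


def pick_site(reports: dict) -> str:
    """Route new traffic to the site with the most relative headroom right now."""
    def headroom(site: str) -> float:
        load, capacity = reports[site]
        return (capacity - load) / capacity

    return max(reports, key=headroom)


print(pick_site(telemetry))  # "iad1": Frankfurt is closest to saturation
```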

Though this approach delivers better workload distribution, it comes with significant trade-offs:

  1. Most enterprises are not staffed with the specialized skills needed to run DNS, a mission-critical service, at 100 percent availability.
  2. The DNS appliances come with a hefty price tag. And because they must be widely deployed, redundantly configured and defended from attack, the solution overall results in both high capital cost and high operational expenditure.
  3. DNS hosted in a single data center cannot deliver the performance a global user base requires, yet the cost and complexity of deploying a globally distributed DNS are prohibitive for most enterprises.
  4. DDoS attacks are widespread and difficult to mitigate. A self-managed DNS becomes a single point of failure for the enterprise’s internet-facing services, and deploying and defending it adds further operational and cost burden.

In light of these drawbacks, most organizations that have deployed data center load balancers are not using the GSLB functions available from their load balancer vendor, and those that have deployed GSLB functions are open to replacing them with a better solution. A superior approach is a cloud-based, managed GSLB solution that uses real-time telemetry from load balancers to make intelligent traffic management decisions.

What GSLB as a service looks like

A cloud-based, managed service is the best delivery option for GSLB. The core attributes and advantages of such an approach are as follows:

  1. Real-time capabilities
    Basic managed DNS, as mentioned above, offers limited traffic management but is attractive because it is globally available, performant and professionally managed. A cloud-based GSLB solution needs to retain those attributes while adding true, real-time traffic management capabilities.
  2. Pre-emptive shifts
    Rather than merely directing workloads away from points of presence that are already overloaded, an effective GSLB solution should prevent overload conditions from happening in the first place. This requires detecting the onset of overload and shifting traffic accordingly, whether the cause is a demand spike, a loss of capacity or both.
  3. Accommodating the hybrid cloud
    Hybrid architecture is the most common deployment model among companies currently using the cloud. Because enterprises deploying hybrid infrastructures often use a mix of ADC types, both commercial and open source, the GSLB solution needs an open interface for collecting real-time data from disparate ADC types (see the sketch after this list).
  4. Reduced costs
    Because there is no need to purchase hardware or software appliances, cloud-based GSLB as a service reduces capital expenditure. At the same time, a managed GSLB solution also decreases operational expenditures by requiring substantially less maintenance.
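
To illustrate the open telemetry interface mentioned in item 3, here is a hypothetical sketch of how reports from disparate ADC types might be normalized into one record before a routing decision is made. The field names, payloads and helper functions are assumptions, not any product's real API (the "scur"/"slim" fields loosely mirror HAProxy's session statistics).

```python
# Hypothetical normalization layer: different ADC types report load in
# different shapes, so the GSLB service converts them into one record.
from dataclasses import dataclass


@dataclass
class SiteReport:
    site: str
    load_pct: float  # 0-100, current utilization
    healthy: bool


def from_haproxy_style(site: str, raw: dict) -> SiteReport:
    # Payload loosely modeled on HAProxy session stats (current vs. limit).
    return SiteReport(site, 100 * raw["scur"] / raw["slim"], raw["status"] == "UP")


def from_vendor_appliance(site: str, raw: dict) -> SiteReport:
    # A commercial ADC that already reports utilization as a percentage.
    return SiteReport(site, raw["utilization"], raw["state"] == "healthy")


reports = [
    from_haproxy_style("fra1", {"scur": 920, "slim": 1000, "status": "UP"}),
    from_vendor_appliance("iad1", {"utilization": 41.0, "state": "healthy"}),
]
best = min((r for r in reports if r.healthy), key=lambda r: r.load_pct)
print(best.site)  # steer new traffic toward the least-loaded healthy site
```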

The best of both worlds

Global load balancing no longer needs to be a choice between the lesser of two evils. Organizations can now pair the advanced traffic management capabilities once available only in proprietary ADC solutions with a globally performant, reliable managed DNS service. By proactively preventing the maldistribution of application workloads, this approach delivers a more consistent end-user experience and better application performance.

Jonathan Lewis is vice president of product at NS1