Latency has always been a challenge when developing applications, and developers and networking engineers who work in the Cloud now have to understand a whole new set of latency delays. Here are some of the common issues, along with advice on how they can be overcome.

The cost of latency

To understand latency, we need to understand its costs and how it impacts business.

Several studies have examined overall website performance with respect to latency. One finding, for example, is that every 20ms drop in network latency results in a 7-15% decrease in page load times.

These studies aren’t just academic exercises: both Amazon and Google found big drops in sales and traffic when pages took longer to load. A half-second delay caused a 20% drop in Google’s traffic, and a tenth of a second of delay can cause a 1% drop in Amazon’s sales. So it’s not just individual sites that have an interest in speeding up the web.

While reducing latency is desirable, not every app requires the lowest latencies, even though we have become more demanding of our internet performance as we distribute our applications across various cloud providers. Applications such as algorithmic or high-frequency trading, video streaming, complex web and database services, and 3D engineering modelling are in the latency-sensitive category. But applications such as email, analytics and some kinds of document management aren’t as demanding.

Possible pitfalls

Traditionally, latency has been characterised by three different measures: round-trip time (RTT), jitter and endpoint computational speed. Add traceroute as a diagnostic tool, and each of these becomes important to understanding the true effect of latency; only after examining all of these metrics can you get the full picture.

RTT measures the time it takes a packet to travel across a network from source to destination and back again, or the time it takes to establish an initial server connection. This is useful in interactive applications, and also in app-to-app situations, such as measuring how a web server and a database server interact and exchange data.
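
As a rough illustration, here is a minimal Python sketch that estimates RTT by timing a TCP connection handshake. The hostname and port are placeholders, and a tool such as ping will give a more precise ICMP-based measurement; this is just one way to get a quick figure from application code.

    import socket
    import time

    def estimate_rtt(host: str, port: int = 443) -> float:
        """Estimate RTT by timing a TCP three-way handshake, in milliseconds."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; we only care about setup time
        return (time.perf_counter() - start) * 1000

    if __name__ == "__main__":
        # Take several samples and report the median to smooth out noise
        samples = sorted(estimate_rtt("example.com") for _ in range(5))
        print(f"median RTT: {samples[len(samples) // 2]:.1f} ms")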

Jitter is the variation in packet transit delay caused by queuing, contention and serialization effects on the path through the network. It can have a large impact on interactive applications such as video or voice.
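
To make this concrete, the short sketch below computes a simple jitter figure as the mean absolute difference between consecutive delay samples. The sample values are hardcoded purely for illustration; RFC 3550 (RTP) defines a smoothed variant of the same idea.

    def jitter_ms(delays: list[float]) -> float:
        """Mean absolute difference between consecutive delay samples, in ms."""
        diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
        return sum(diffs) / len(diffs)

    # Example: five round-trip delay samples in milliseconds
    print(f"jitter: {jitter_ms([32.1, 35.4, 31.8, 40.2, 33.0]):.1f} ms")

A path with a low average delay but high jitter can feel worse for voice or video than a slower but steadier one, which is why the variation matters as much as the mean.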

The speed of the computers at the core of the application is also crucial, as their configuration determines how quickly they can process data. While this seems simple, it can be difficult to measure once we start using cloud-based servers.
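
One way to separate compute delay from network delay is to time the processing step in isolation. The sketch below is a generic timing helper; the workload shown (summing a large range) is a stand-in for real request processing.

    import time

    def time_workload(fn, *args) -> float:
        """Return wall-clock time for a single call, in milliseconds."""
        start = time.perf_counter()
        fn(*args)
        return (time.perf_counter() - start) * 1000

    # Placeholder workload standing in for real request processing
    print(f"compute time: {time_workload(sum, range(10_000_000)):.1f} ms")

On shared cloud servers, repeating such a measurement at different times of day can reveal the variability introduced by noisy neighbours and hypervisor scheduling.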

Finally, traceroute is a popular command that examines the individual hops, or network routers, that a packet traverses to go from one place to another. Each hop can introduce more or less latency. The path with the fewest and quickest hops may not correspond to what we would commonly think of as geographically the shortest link. For example, the lowest-latency and fastest path between a computer in Singapore and one in Sydney, Australia might go through San Francisco.
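
If you want to capture such a trace from a script, one minimal approach is simply to shell out to the system traceroute command, as in this Python sketch. The hostname is a placeholder, and on Windows the equivalent command is tracert, with different options.

    import subprocess

    # Run the system traceroute and print its per-hop output;
    # -n skips reverse-DNS lookups so the trace completes faster.
    result = subprocess.run(
        ["traceroute", "-n", "example.com"],
        capture_output=True, text=True, timeout=120,
    )
    print(result.stdout)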

Virtual complexities

Today’s data center isn’t just a bunch of rack-mounted servers but a complex web of hypervisors running dozens of virtual machines. This introduces yet another layer of complexity, since the virtualized network infrastructure can add its own series of packet delays before any data even leaves the rack.

But we also have to add the delays involved in running virtualized desktops, where relevant. Many corporations have begun deploying virtual desktops, and they introduce yet another source of latency. If the deployment isn’t designed properly, you can experience significant delays just logging into the network, let alone running your applications on these desktops.

Relying on the Internet for application connectivity in the cloud introduces a degree of variability and uncertainty around bandwidth, speed and latency. This can be unacceptable to many large and medium-sized enterprises, which are increasingly putting the emphasis on end-to-end quality-of-service management. Dedicated connectivity to cloud providers overcomes this, and hooking up via carrier-neutral data centers and connectivity providers can also bring benefits in terms of costs, monitoring, troubleshooting and support.

Latency in itself does not have to be an issue; it’s the unpredictability of latency that really causes the problems. Although there are several complicating factors, careful planning can ensure that these pitfalls are avoided and applications run smoothly in the Cloud.