
Cracking latency in the Cloud


Latency has always been a challenge when developing applications, and developers and network engineers who work in the Cloud now have to understand a whole new set of latency delays. Here are some of the common issues, and some advice on how they can be overcome.

The cost of latency

To understand latency, we first need to understand what it costs and how it affects the business.

Several studies have examined overall website performance with respect to latency. For example, one found that every 20ms reduction in network latency produces a 7-15% reduction in page load times.

These studies aren't just an academic exercise: both Amazon and Google found big drops in sales and traffic when pages took longer to load. A half-second delay caused a 20% drop in Google's traffic, and a tenth of a second of added delay caused a 1% drop in Amazon's sales. It's not just individual sites that have an interest in speeding up the web.
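As a rough back-of-envelope sketch of what these figures imply, and assuming (purely for illustration) that the published numbers extrapolate linearly:

```python
def page_load_improvement(latency_drop_ms, pct_per_20ms=(7, 15)):
    """Linear extrapolation of the cited 7-15% page-load gain per 20ms cut."""
    low, high = pct_per_20ms
    factor = latency_drop_ms / 20.0
    return (low * factor, high * factor)

def sales_impact_pct(extra_delay_ms, pct_per_100ms=1.0):
    """Amazon's figure: roughly 1% of sales lost per 100ms of added delay."""
    return extra_delay_ms / 100.0 * pct_per_100ms

print(page_load_improvement(60))  # → (21.0, 45.0), i.e. a 21-45% faster page
print(sales_impact_pct(250))      # → 2.5, i.e. a 2.5% sales drop
```

Real-world response is unlikely to be this linear at the extremes, but the arithmetic shows why even tens of milliseconds attract so much engineering attention.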

While reducing latency is desirable, not every app requires the lowest latencies. Certainly, we have become more demanding of our internet performance as we distribute our applications across various cloud providers. Applications such as algorithmic or high-frequency trading, video streaming, complex web and database services and 3D engineering modelling demand the lowest latencies. But applications such as email, analytics and some kinds of document management aren't as demanding.

Possible pitfalls

Latency has traditionally been assessed using three measures: round-trip time (RTT), jitter and endpoint computational speed, with traceroute as a supporting diagnostic tool. Each of these is important to understanding the true effect of latency, and only by considering them all can you get the full picture.

RTT measures the time it takes a packet to travel across the network from source to destination and back again, or, equivalently, the time taken to establish an initial server connection. This is useful for interactive applications, and also for examining app-to-app situations, such as measuring how a web server and a database server interact and exchange data.
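Because a TCP three-way handshake takes one round trip, timing a plain socket connection gives a quick approximation of RTT without any special tooling. A minimal sketch (the host and port are illustrative, not specific to any provider):

```python
import socket
import time

def tcp_connect_rtt(host, port, timeout=3.0):
    """Approximate one RTT (in milliseconds) by timing a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care about the elapsed time
    return (time.perf_counter() - start) * 1000.0

# Example against any reachable service, e.g.:
# print(f"RTT to example.com: {tcp_connect_rtt('example.com', 80):.1f} ms")
```

Note that the handshake includes a little connection-setup overhead at the far end, so this slightly overstates pure wire latency; dedicated tools such as ping use ICMP echo instead.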

Jitter is the variation in packet transit delay caused by queuing, contention and serialization effects along the path through the network. This can have a large impact on interactive applications such as video or voice.
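A simple way to quantify jitter from a series of RTT measurements is the mean absolute difference between successive samples (RFC 3550's RTP jitter estimator is a smoothed variant of the same idea). A minimal sketch with hypothetical samples:

```python
def mean_jitter(rtts_ms):
    """Mean absolute difference between successive RTT samples, in ms."""
    if len(rtts_ms) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    return sum(diffs) / len(diffs)

samples = [20.1, 19.8, 25.3, 20.0]  # hypothetical RTT samples in ms
print(f"jitter ≈ {mean_jitter(samples):.1f} ms")  # → jitter ≈ 3.7 ms
```

Two links can have identical average RTT but very different jitter; for voice and video it is usually the jitter figure that determines how deep the receive buffer must be.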

The speed of the computers at the core of the application is also crucial, as their configuration determines how quickly they can process the data. While this seems simple, it can be difficult to gauge once we start using cloud-based servers, where the underlying hardware is shared and largely opaque.

Finally, traceroute is a popular command that examines the individual hops, or network routers, that a packet traverses to get from one place to another. Each hop introduces its own latency. The path with the fewest and quickest hops may or may not correspond to what we would commonly think of as the geographically shortest link. For example, the lowest-latency, fastest path between a computer in Singapore and one in Sydney, Australia might go through San Francisco.
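Per-hop latencies can be pulled out of traceroute's output programmatically, which is handy for spotting which hop introduces the delay. A sketch that wraps the system traceroute command and parses a canned, hypothetical output so it runs offline (the regex assumes the common Unix output format):

```python
import re
import subprocess

# One traceroute line: hop number, a host field, then at least one "<ms> ms".
HOP_RE = re.compile(r"^\s*(\d+)\s+\S.*?([\d.]+)\s*ms", re.MULTILINE)

def parse_hop_latencies(traceroute_output):
    """Return (hop number, first latency in ms) for each responding hop."""
    return [(int(hop), float(ms)) for hop, ms in HOP_RE.findall(traceroute_output)]

def run_traceroute(host):
    """Run the system traceroute (must be on PATH; Windows uses 'tracert')."""
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True).stdout
    return parse_hop_latencies(out)

# Hypothetical captured output, so the parsing step runs without a network:
sample = """traceroute to example.com (93.184.216.34), 30 hops max
 1  192.168.1.1  1.234 ms  1.100 ms  1.050 ms
 2  10.0.0.1  8.512 ms  8.300 ms  8.150 ms
 3  * * *
 4  203.0.113.9  142.771 ms  141.902 ms  143.004 ms
"""
print(parse_hop_latencies(sample))  # → [(1, 1.234), (2, 8.512), (4, 142.771)]
```

The big jump between hops 2 and 4 in the sample is the kind of signature that points at a long-haul or congested link; hops that time out (the `* * *` line) simply drop out of the result.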

Virtual complexities

Today's data center isn't just a bunch of rack-mounted servers but a complex web of hypervisors running dozens of virtual machines. This introduces yet another layer of complexity, since the virtualized network infrastructure can add its own series of packet delays before any data even leaves the rack.

We also have to add the delays involved in running virtualized desktops, if that is relevant to our situation. Many corporations have begun deploying virtual desktops, and these introduce yet another source of latency. If the deployment is not designed properly, you can experience significant delays just logging into the network, let alone running your applications on these desktops.

Relying on the Internet for application connectivity in the cloud introduces a degree of variability and uncertainty around bandwidth, speed and latency. This can be unacceptable to many large and medium-sized enterprises, which are increasingly putting the emphasis on end-to-end quality of service management. Using dedicated connectivity to cloud providers overcomes this, and connecting via carrier-neutral data centers and connectivity providers can also bring benefits in terms of costs, monitoring, troubleshooting and support.

Latency in itself does not have to be an issue; it’s the unpredictability of latency that really causes the problems. Although there are several complicating factors, careful planning can ensure that these pitfalls are avoided and applications run smoothly in the Cloud.

 
