One of the biggest bottlenecks in the data center could soon be overcome if a recent invention showcased at the Massachusetts Institute of Technology (MIT) comes to fruition.

It could slash latency to a fraction of current levels, speeding up throughput and communication between all points on a private cloud.

The invention, Fastpass, offers a much more efficient way for servers in data centers to talk to each other. It would replace the decentralized protocol, the accepted method of initiating an exchange between two servers, with a centralized one that bypasses many of its predecessor's problems.

The decentralized protocol used in today's data centers is too unwieldy, say the researchers: routers become overwhelmed as they try to set up connections between all the servers sharing data and processing workloads, and requests queue up waiting to be processed by each data center's routers.
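A centralized protocol, by contrast, puts one coordinator in charge of deciding when each server may transmit. The following Python sketch illustrates the general idea only; the function names and the simple greedy matching are assumptions for illustration, not MIT's actual Fastpass implementation.

```python
# Illustrative sketch of centralized transmission scheduling (not the
# actual Fastpass code). Servers ask a central coordinator for permission
# to send; each timeslot, the coordinator grants a conflict-free set of
# transmissions so that every source sends to at most one destination and
# every destination receives from at most one source.

def allocate_timeslot(requests):
    """Greedily pick a conflict-free set of (src, dst) transmissions.

    requests: list of (src, dst) pairs waiting to be scheduled.
    Returns (granted, deferred): granted pairs run in this timeslot,
    deferred pairs wait for a later one.
    """
    busy_src, busy_dst = set(), set()
    granted, deferred = [], []
    for src, dst in requests:
        if src not in busy_src and dst not in busy_dst:
            granted.append((src, dst))
            busy_src.add(src)
            busy_dst.add(dst)
        else:
            deferred.append((src, dst))
    return granted, deferred

# Example: three servers contending for two destinations.
pending = [("A", "X"), ("B", "X"), ("C", "Y")]
slot1, pending = allocate_timeslot(pending)
slot2, pending = allocate_timeslot(pending)
print(slot1)  # [('A', 'X'), ('C', 'Y')]
print(slot2)  # [('B', 'X')]
```

Because the coordinator sees every pending request at once, it can keep per-router queues near empty instead of letting contention build up hop by hop.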

The resulting queues create an average server-to-server latency within a data center of 3.56 microseconds, according to the MIT researchers. Their Fastpass alternative, a centralized communications protocol, cut that latency to 0.23 microseconds on average in their tests.
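As a quick back-of-the-envelope check (not a figure from the paper), the ratio of those two measured latencies works out to roughly fifteen:

```python
# Ratio of the average latencies quoted above.
old_latency_us = 3.56   # decentralized scheduling, microseconds
new_latency_us = 0.23   # Fastpass, microseconds

speedup = old_latency_us / new_latency_us
print(round(speedup, 1))  # 15.5
```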

If accurate, the latency between servers within a data center would drop to roughly one fifteenth of today's levels, with a corresponding boost to data center throughput.

The researchers found that by using this approach, the Fastpass system is able to handle a network transmitting at 2.2 terabits per second, with just eight cores. In experiments to be presented in August, the researchers will show that Fastpass cut the average queue length in a Facebook data center by 99.6%.

Analyst Clive Longbottom, senior researcher at Quocirca, said the wider implications will affect data center customers more than the facilities themselves. Lower latency will not prompt an exodus of data centers from expensive locations such as London, he said.

“Average WAN latency is still in the hundreds of milliseconds, so cutting in-facility latency from 3 to 0.3 milliseconds will lop about 3% off end to end results, so that isn't the issue,” said Longbottom.

However, if all the business logic and data are held in the same place, and with flash-based storage latencies now in the nanoseconds, anything that keeps bottlenecks to the lowest possible level is good for clients, he said.

“Number crunching and big data analytics can all be done in-facility at near-real time,” said Longbottom, “all that is then required is to minimise chatter between the facility and the access device through using screen presentation only.”