In the race to shave microseconds and nanoseconds from transactions, high-frequency traders and other developers of low-latency applications are pushing their technology ever closer to the speed of light. Latency, the delay incurred in transmitting data across a network, is critical to the success of high-frequency trading platforms, enterprise systems and scientific applications.
Over the past few years, technologists have reduced latency through a variety of improvements: faster CPU clock speeds, faster device input/output, greater network bandwidth, quicker operating system functions and more efficient software. Another key component is the latency introduced by the fiber-optic network over which the data travels.
The fundamental constraint in a fiber network is the speed of light, which is 300,000 km per second in a vacuum. In glass fiber, light travels at roughly two-thirds of that speed because of the fiber's refractive index (about 1.47). No matter how fast computers and software generate trades, any network transmitting data introduces latency due to the fiber-optic media over which it travels. It is a law of physics that light propagation on a fiber-optic network introduces a constant latency of approximately five microseconds per kilometer (0.6 mi). While the physics cannot be changed, careful optical network design and component selection can minimize the latency added by the other portions of the optical network.
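The five-microseconds-per-kilometer figure follows directly from the speed of light and the refractive index of glass. A minimal back-of-the-envelope check (the index value and the route length below are typical illustrative numbers, not figures from this article):

```python
# Propagation latency in optical fiber: a back-of-the-envelope check.
C_VACUUM_KM_S = 300_000        # speed of light in vacuum, km/s
FIBER_INDEX = 1.468            # typical group index of single-mode fiber (assumed)

speed_in_fiber = C_VACUUM_KM_S / FIBER_INDEX      # ~204,000 km/s
latency_us_per_km = 1e6 / speed_in_fiber          # ~4.9 microseconds per km

route_km = 1200                # hypothetical metro-to-metro fiber route
one_way_us = route_km * latency_us_per_km
print(f"{latency_us_per_km:.2f} us/km -> {one_way_us / 1000:.2f} ms one way over {route_km} km")
```

Over a 1,200 km route this propagation delay alone is nearly six milliseconds each way, which is why it dominates every other latency source discussed below.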
Achieving low latency optical transport
In applications that require low latency, the use of latency-optimized optical systems will minimize the latency introduced in the transport network. Optical component and system technology is evolving quickly, and the latest solutions deliver measurable latency improvements. There are two key factors to consider when building a low latency optical network: 1) network design and 2) low-latency components and systems.
One straightforward way to minimize latency is to use the shortest fiber connection from the computing platform to the user. This factor is determined by available real estate and fiber accessibility. Once data center locations and fiber routes are fixed, the factors listed below have the most significant impact on latency.
Most fiber networks are built using dense wavelength-division multiplexing (DWDM). DWDM wavelengths make efficient use of fiber by creating virtual pipes within it. Today, the data rates for these wavelengths are typically 10Gbps. However, 100Gbps interfaces are being deployed now and, starting this year, will be the primary new interface type installed within and between data centers. For cutting-edge installations, 200Gbps interfaces will also become available by the end of this year. Building a network with the fastest available data rate is the first step toward a low latency optical network. However, not all 100G components have the same latency. When moving to 100G, it is important to measure the latency from the transmit side of the link to the receive side. This measurement will show which 100G components have the lowest latency.
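One reason a faster line rate helps is serialization delay: the time it takes to clock a frame onto the wire shrinks in proportion to the data rate. A small sketch of that relationship (the frame size is an illustrative Ethernet figure, not from the article):

```python
# Serialization delay: time to clock one frame onto the link.
FRAME_BITS = 1500 * 8          # a 1500-byte frame, a common Ethernet payload size

def serialization_ns(rate_gbps: float) -> float:
    """Nanoseconds needed to serialize one frame at the given line rate."""
    return FRAME_BITS / rate_gbps   # bits / (Gbit/s) comes out in nanoseconds

for rate in (10, 100, 200):
    print(f"{rate:>3} Gbps: {serialization_ns(rate):7.1f} ns per 1500-byte frame")
```

Moving from 10G to 100G cuts this particular component from 1,200 ns to 120 ns per frame, though in practice the transceiver and framing logic of each specific 100G product also contribute, which is why the transmit-to-receive measurement above matters.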
Dispersion compensation fiber or modules
Chromatic dispersion is a broadening of the input signal as it travels down the length of the fiber. The farther the signal travels, the more dispersion is introduced. A number of competing dispersion compensating technologies exist for use on the majority of fiber installed around the world. The best dispersion compensating components introduce a negligible 0.15 nanoseconds of latency per kilometer.
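Using the article's figure of roughly 0.15 nanoseconds per kilometer, the latency a good dispersion compensator adds over a long route is easy to bound (the route length is a hypothetical example):

```python
# Added latency from dispersion compensation, using the article's
# figure of ~0.15 ns per kilometer of compensated span.
DCM_LATENCY_NS_PER_KM = 0.15

def dcm_latency_us(span_km: float) -> float:
    """Microseconds of latency a dispersion compensator adds over a span."""
    return span_km * DCM_LATENCY_NS_PER_KM / 1000.0

print(f"{dcm_latency_us(1200):.2f} us added over a 1200 km route")
```

Even over 1,200 km this is under a fifth of a microsecond, small next to the several milliseconds of propagation delay on the same route, which is why the best compensating components are described as negligible.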
Passive optical MUX/DEMUX
Optical multiplexing enables individual wavelengths to be added or dropped so that bandwidth is delivered to the specific locations that need it. Many vendors offer MUX/DEMUX equipment, and a number of technologies are used to add and drop signals. These devices are known generically as passive optical MUX/DEMUX, and selecting the solutions with the lowest latency is another factor in building a latency-efficient fiber-optic network.
100G and other transponders
100G ports will soon be the preferred data rate for most new optical transport links between data centers. A number of 100G technologies are coming to market: some offer longer reach, some offer lower cost, and some are optimized for latency. Direct-detect transparent transponders can deliver less than five nanoseconds of latency for latency-sensitive applications.
Minimizing latency on optical transport links between data centers depends on three factors: 1) minimizing the fiber link distance between locations; 2) using low latency components and modules; and 3) designing the overall optical transport network to minimize latency. Using these guidelines, anyone can minimize latency on their fiber links.
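Pulling the figures from the preceding sections together gives a simple one-way latency budget for a point-to-point link. All the default values below are illustrative numbers taken from or consistent with the discussion above, not a vendor specification:

```python
# A simple one-way latency budget for a single-hop optical transport link.
def link_latency_us(route_km: float,
                    fiber_us_per_km: float = 4.9,   # propagation in fiber
                    dcm_ns_per_km: float = 0.15,    # dispersion compensation
                    transponder_ns: float = 5.0) -> float:
    """One-way latency in microseconds for a point-to-point link."""
    propagation = route_km * fiber_us_per_km
    dispersion = route_km * dcm_ns_per_km / 1000.0
    transponders = 2 * transponder_ns / 1000.0      # one transponder at each end
    return propagation + dispersion + transponders

print(f"{link_latency_us(100):.3f} us one way over 100 km")
```

The budget makes the ranking of the three factors concrete: over 100 km, propagation contributes about 490 µs while the components add well under a tenth of a microsecond, so shortening the route matters first, and component selection decides the remainder.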
Mannix O’Connor is the technical marketing director at MRV Communications, a global supplier of packet and optical solutions that power some of the world’s largest networks.