The biggest challenge facing large data centers today is how to scale the speed of the optical fabric between switches and routers from 10Gbps to 100Gbps. The network interconnect has always been a bottleneck, of course, but in earlier generations the migration path from one port speed to another was generally straightforward: increase the speed by a factor of 10, and define a short-distance protocol for copper wire and a longer-distance protocol for optical fiber. Each new generation brought lower power, smaller size and lower cost.
The transition from 10Gbps to 100Gbps has been far more challenging. It’s been more than two years since the IEEE standardized 100Gbps Ethernet LAN communications, but the number of deployed 100Gbps transceivers remains minuscule for three reasons:
First-generation C Form Factor Pluggable (CFP – a standardized pluggable transceiver that supports 100Gbps Ethernet) packages are so big that only four of them fit on the front panel of a standard rack-mounted switch. That yields just four 100Gbps pipes, or 400Gbps of throughput for the switch. By contrast, a rack-mounted switch can support 48 10Gbps SFP ports, or 480Gbps of throughput. With 100Gbps CFP products, front-panel density actually gets worse.
Power is a problem. A CFP package consumes 20-24 watts, compared with about 10 watts for ten 10Gbps SFP packages carrying the same aggregate bandwidth, so the power per bit has more than doubled (a rough back-of-the-envelope comparison appears below).
Dramatic cost increases have stifled adoption. Data center customers, expecting the usual decline in dollars per bit, are instead discovering that 100G solutions cost five to 10 times more than 10G solutions. No wonder this transition has not been smooth.
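To make the density and power comparisons concrete, here is a rough back-of-the-envelope calculation using only the figures quoted above (four CFP ports at 20-24 watts versus 48 SFP ports at roughly one watt each). Actual numbers vary by switch model and vendor, so treat this as illustrative rather than a benchmark.

```python
# Back-of-the-envelope comparison using the figures quoted in the article.
# Actual numbers vary by switch model and vendor.

# 10Gbps generation: 48 SFP ports, ~1 W per port (about 10 W per 10 ports)
sfp_ports, sfp_rate_gbps, sfp_power_w = 48, 10, 1.0

# First-generation 100Gbps: 4 CFP ports, 20-24 W each (22 W midpoint assumed)
cfp_ports, cfp_rate_gbps, cfp_power_w = 4, 100, 22.0

sfp_throughput = sfp_ports * sfp_rate_gbps        # 480 Gbps of front-panel capacity
cfp_throughput = cfp_ports * cfp_rate_gbps        # 400 Gbps of front-panel capacity

sfp_w_per_gbps = sfp_power_w / sfp_rate_gbps      # ~0.10 W per Gbps
cfp_w_per_gbps = cfp_power_w / cfp_rate_gbps      # ~0.22 W per Gbps

print(f"Front-panel throughput: {sfp_throughput} Gbps (SFP) vs {cfp_throughput} Gbps (CFP)")
print(f"Power per Gbps: {sfp_w_per_gbps:.2f} W (SFP) vs {cfp_w_per_gbps:.2f} W (CFP), "
      f"{cfp_w_per_gbps / sfp_w_per_gbps:.1f}x worse")
```

Even with generous assumptions, the first-generation 100Gbps modules deliver less front-panel throughput at more than twice the power per bit.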
At the same time, server blades, whose performance increases every year, are in the middle of a transition from 1G ports to 10G ports. In 2011, roughly 50% of server blades had either 10G adapter cards or 10G LAN on Motherboard (LOM).
By 2013, virtually 100% of server blades will support 10G. Switches that provide 10G channels to the server blades need 100Gbps links to other switches and routers, so they must scale to 100G ports even though size, power and cost have all gotten worse.
Silicon photonics chips
One option for 100Gbps transceivers is to use silicon photonics chips inside the optical transceiver. Because silicon photonics chips are fabricated on the same complementary metal-oxide semiconductor (CMOS) wafers as electronic chips, they are inexpensive to produce.
Silicon photonics chips are processed using mask layers in the same foundries as electronic wafers. Just like traditional wafers, silicon photonics wafers are diced into chips and packaged. Optical chips fabricated in this manner can be just as inexpensive as their electrical cousins, and when higher volumes are needed, the wafer fab simply runs more wafers of the same recipe.
Silicon photonics also eliminates the need, and the expense, of hand-assembling hundreds of piece parts. Silicon photonics chips are far smaller than the optical subassemblies they replace: a chip less than half the size of a postage stamp can support 100Gbps transmission. This lowers cost, reduces power consumption and improves density.
Other optical solutions assembled from discrete components require expensive, hermetically sealed packages; even a speck of dust between any of the components can obstruct the light path and render the product useless. By contrast, silicon photonics devices are self-contained within the layers of the chip. With no need for hermeticity, they can reuse low-cost, industry-standard electronics packaging.
A huge advantage of optical communication is the ability to carry parallel channels over the same optical fiber using different wavelengths of light. This technique is called wavelength division multiplexing (WDM), and it has no equivalent in the electrical domain. With WDM, four, eight or even 40 channels of light, each at a different wavelength, can share a single strand of optical fiber. Fiber is cheap, especially when a single strand replaces so many copper cables, so for large pipes optical interconnect is far less expensive than copper cabling.
To combine the channels, WDM requires that the silicon photonics chip contain a grating, or WDM multiplexer. In the 100Gbps case, four lasers, each operating at a different wavelength, are modulated at 25Gbps by on-chip modulators, and the resulting signals are combined by the grating.
A single output fiber carrying the entire 100Gbps pipe is connected to the chip. On the receive side, one input fiber (also carrying four wavelengths of light and therefore 100Gbps) is demultiplexed, and each of the four channels of light is converted back to a 25Gbps electrical signal.
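The mux/demux arithmetic is easy to picture with a small model. The sketch below is purely illustrative: the OpticalChannel class and the multiplex/demultiplex functions are invented for this example (the wavelengths are roughly the LAN-WDM grid used by 4x25Gbps transceivers, shown only for flavor). It simply demonstrates how four 25Gbps lanes on distinct wavelengths add up to one 100Gbps pipe and are recovered at the far end.

```python
from dataclasses import dataclass

# Illustrative 4-wavelength channel plan (approximately the LAN-WDM grid).
WAVELENGTHS_NM = [1295.56, 1300.05, 1304.58, 1309.14]
LANE_RATE_GBPS = 25

@dataclass
class OpticalChannel:
    wavelength_nm: float
    rate_gbps: int
    payload: bytes

def multiplex(lanes):
    """Combine per-wavelength lanes onto one logical fiber (modeled as a list)."""
    assert len({ch.wavelength_nm for ch in lanes}) == len(lanes), \
        "each lane must use a distinct wavelength"
    return list(lanes)  # one fiber now carries all four channels

def demultiplex(fiber):
    """Separate the channels again, keyed by wavelength (the grating's job)."""
    return {ch.wavelength_nm: ch for ch in fiber}

# Transmit side: four 25Gbps electrical lanes modulated onto four wavelengths.
lanes = [OpticalChannel(w, LANE_RATE_GBPS, payload=bytes([i] * 4))
         for i, w in enumerate(WAVELENGTHS_NM)]
fiber = multiplex(lanes)
print("Aggregate rate:", sum(ch.rate_gbps for ch in fiber), "Gbps")  # 100 Gbps

# Receive side: each recovered channel becomes a 25Gbps electrical signal again.
recovered = demultiplex(fiber)
assert all(recovered[w].payload == lane.payload
           for w, lane in zip(WAVELENGTHS_NM, lanes))
```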
Reduce power and increase scale
Silicon photonics solutions are also low power. First-generation 100Gbps CFP transceivers consume 20-24 watts, but a silicon photonics solution can consume less than 3.5 watts. That is not only far less than existing 100Gbps solutions; it is also roughly 70% less than the equivalent set of 10Gbps transceivers. It means the power budget for four CFP transceivers could support some 24 silicon photonics transceivers.
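As a quick sanity check on that claim, the arithmetic below uses the 20-24 watt CFP range quoted earlier and the 3.5 watt silicon photonics figure, ignoring cooling and regulation overhead that a real design would also have to budget for.

```python
# Rough power-budget check using the figures quoted in the article.
CFP_RANGE_W = (20, 24)     # first-generation 100Gbps CFP transceiver
SIP_W = 3.5                # silicon photonics 100Gbps transceiver

for cfp_w in CFP_RANGE_W:
    budget = 4 * cfp_w                       # four CFP slots' worth of power
    print(f"{budget} W budget -> {budget / SIP_W:.0f} silicon photonics transceivers")
# 80 W budget -> 23 silicon photonics transceivers
# 96 W budget -> 27 silicon photonics transceivers
```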
Power is not the only reason why rack-mounted switches can support only four 100Gbps CFP packages. It’s also the size. The CFP is larger than an iPhone, which is huge by optical standards. Solutions based on silicon photonics are so small they can fit in a QSFP package, which is the size of a flash drive.
One of the best things about silicon photonics is its ability to scale from 100Gbps to 400Gbps and on to 1.6Tbps. A 400Gbps configuration can be built from either 16 WDM lanes operating at 25Gbps or eight WDM lanes operating at 50Gbps. The key optical components, the high-speed modulators and detectors, are already capable of 50Gbps operation and could support either approach. Scaling to 1.6Tbps will likely be accomplished with 32 WDM channels, each operating at 50Gbps. So when the high-speed data center fabric is ready to move to 400Gbps and 1.6Tbps, silicon photonics solutions will be ready as well.
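The lane arithmetic behind those scaling steps is simple to tabulate. The configurations listed below are the ones mentioned in this article; future standards may, of course, settle on different lane counts and rates.

```python
# Aggregate bandwidth of the WDM lane configurations discussed above.
configs = [
    ("100G today",      4, 25),   # 4 lanes x 25 Gbps
    ("400G option A",  16, 25),   # 16 lanes x 25 Gbps
    ("400G option B",   8, 50),   # 8 lanes x 50 Gbps
    ("1.6T projected", 32, 50),   # 32 lanes x 50 Gbps
]

for name, lanes, rate_gbps in configs:
    total = lanes * rate_gbps
    print(f"{name:15s}: {lanes:2d} lanes x {rate_gbps} Gbps = {total} Gbps")
```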
Entering the market
A few companies are starting to offer 40Gbps active optical cables (AOCs) for the link between servers and top-of-rack (TOR) switches. AOCs have interfaces that plug and play with electrical cables and use silicon photonics to convert the signal to optics inside the package. At 40Gbps, AOCs consume less power and support longer reaches than electrical cables.
In the not-too-distant future, server blades will scale to 100Gbps ports, flooding the network with traffic yet again. In fact, Intel recently announced that future Xeon server chips will integrate an on-chip fabric controller supporting bandwidths of more than 100Gbps, which means server blades with 100Gbps links are coming. Small, low-power silicon photonics chips are the best solution for this application.
Silicon photonics' ability to deliver faster, smaller interconnects that consume less power offers the entire semiconductor industry a new world of opportunities, but we are still in the early stages. As we look to the future, next-generation data centers, high-performance computers and, eventually, consumer video products will all benefit from optical interconnects built from silicon photonics.
This article first appeared in FOCUS magazine issue 26.