Google talks about its network at conference, while AT&T releases SDN specs to Open Compute
AT&T is sharing the technology that has cut the cost of its gigabit networking service, while at the same network conference Google has revealed a little information about its own redesign of data center networking.
AT&T uses software to replace expensive optical network hardware with commodity devices to deliver its Gigapower networking service to businesses and homes. The technology it uses will be released as open source and shared through the Open Compute Project, the company told the Open Networking Summit in Silicon Valley this week. At the same event, Google’s chief network architect gave a little detail about the Jupiter network switches it designed for its data centers.
Google Jupiter Superblock
AT&T: sharing is caring
The US network giant is using network functions virtualization (NFV) to implement network kit in software, creating virtual line cards that can be deployed on a software defined network (SDN). It’s starting with optical line terminals (OLTs) for gigabit passive optical networks (GPON).
But it’s going further and disaggregating the functions. “Disaggregation is a big deal,” said AT&T vice president John Donovan in a blog post coinciding with the conference. “It means we don’t just clone a hardware device completely in software and continue running it as before. Instead, we break out the different subsystems in each device. We then optimize each of those subsystems. We upgrade some and discard others.”
The operator is doing the same thing with its broadband network gateway and Ethernet aggregation switch, and will release the specifications for these functions as open source through the Open Compute Project, Donovan said.
This means that white box makers can bundle the functions and sell kit to AT&T or other operators, he continued.
It’s part of a bigger open source effort, which includes the release of a project that re-architects “central offices” (aka telephone exchanges), called CORD, or Central Office Re-architected as a Datacenter.
At the same conference, Google’s network lead Amin Vahdat talked about the custom-designed networking technology which operates inside Google data centers.
These Google-designed switches have long been a closely-guarded secret, unlike the equipment used in Facebook’s data centers, which is shared through the Open Compute Project.
Vahdat’s related blog post makes it pretty clear that Google is still keeping the details private, dropping only tiny crumbs of information.
For the last ten years, Google has been designing its own network switches and protocols. It started with a system called Firehose, and the blog shows the current Jupiter switches, which allow it to deliver 1 Petabit/s of total “bisection bandwidth” (that is, the overall capacity from one half of the network to the other). This would allow a network of 100,000 servers to exchange 10Gb/s each, says Vahdat, or “enough to read the entire scanned contents of the Library of Congress in less than 1/10th of a second”.
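The figures Vahdat quotes check out with some quick arithmetic, as a sketch:

```python
# Back-of-the-envelope check of the bisection bandwidth figure quoted above.
# The numbers (100,000 servers at 10 Gb/s each) come from the article;
# the calculation itself is plain unit conversion.

servers = 100_000        # servers in the hypothetical network
per_server_gbps = 10     # full-rate bandwidth per server, in Gb/s

total_gbps = servers * per_server_gbps   # 1,000,000 Gb/s
total_pbps = total_gbps / 1_000_000      # gigabits -> petabits

print(f"{total_pbps:.0f} Pb/s")  # -> 1 Pb/s, matching the quoted figure
```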
Google effectively developed SDN independently, Vahdat says, and has a few key principles of network design. It uses a “Clos” topology, which combines a large number of cheap switches to replace larger, more expensive ones. It keeps the software control stack centralized to manage thousands of switches in one go, and it builds its own software and hardware - and even its own custom network protocols - and implements this on silicon from vendors.
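The Clos idea - capacity from many small switches rather than one big chassis - can be sketched with a toy calculation. The topology below is a generic two-tier leaf-spine fabric (the folded form of a Clos network), and the switch counts and link speeds are hypothetical commodity-part numbers, not Google's actual design:

```python
# Illustrative sketch (not Google's actual design): a two-tier leaf-spine
# fabric, the folded form of a Clos topology. Capacity scales with the
# number of cheap spine switches instead of depending on one large,
# expensive chassis switch.

def leaf_spine_capacity(leaves: int, spines: int, uplink_gbps: int) -> int:
    """Total leaf-to-spine capacity, assuming each leaf has one uplink
    to every spine (hypothetical uniform wiring)."""
    return leaves * spines * uplink_gbps

# Hypothetical commodity parts: 32 leaves, 16 spines, 40 Gb/s uplinks.
print(leaf_spine_capacity(32, 16, 40), "Gb/s")  # -> 20480 Gb/s

# Doubling the number of cheap spines doubles fabric capacity.
print(leaf_spine_capacity(32, 32, 40), "Gb/s")  # -> 40960 Gb/s
```

The scaling step is the point: growing the fabric means adding more inexpensive switches, which is the principle the article attributes to Google's design.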
Elsewhere in networking, Google has been a bit more open, sharing the optical networking configuration used in the wide area networks between its data centers, but only releasing it to a select group of vendors and operators called OpenConfig.