Google has deployed Andromeda, the software-defined networking (SDN) technology it developed for its own use, across two availability zones of Compute Engine, the company's public cloud infrastructure service. Compute Engine lets users rent virtual compute and storage capacity in Google data centers.

In a blog post announcing the rollout, Amin Vahdat, distinguished engineer at Google, said customers in the Compute Engine zones us-central1-b and europe-west1-a would see significantly higher network throughput. The company plans to migrate all zones to Andromeda over the next several months.

Andromeda is Google's substrate for network virtualization. An Andromeda controller is the orchestration point for provisioning, configuring and managing virtual networks and in-network packet processing.

The controller orchestrates across virtual machines, hypervisors, operating systems, network interface cards, top-of-rack switches, fabric switches, border routers and the network peering edge.
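
To make that idea concrete, here is a minimal, hypothetical sketch of a single controller fanning one piece of provisioning intent out to several layers of the stack. The Controller and StackLayer classes, the layer names and the config fields are all invented for illustration and are not Andromeda's actual interfaces.

```python
# Hypothetical sketch: one controller provisions a virtual network by
# pushing the same intent to every programmable layer of the stack.
from dataclasses import dataclass, field


@dataclass
class VirtualNetwork:
    name: str
    cidr: str
    vm_ids: list[str] = field(default_factory=list)


class StackLayer:
    """One programmable element (hypervisor vswitch, NIC, ToR switch, ...)."""

    def __init__(self, kind: str, device_id: str):
        self.kind = kind
        self.device_id = device_id
        self.config: dict = {}

    def apply(self, config: dict) -> None:
        # A real element would translate this into flow tables, NIC offload
        # rules, switch ACLs and so on; here we only record the intent.
        self.config.update(config)


class Controller:
    """Single orchestration point: one provisioning call fans out to all layers."""

    def __init__(self, layers: list[StackLayer]):
        self.layers = layers

    def provision(self, net: VirtualNetwork) -> None:
        intent = {"network": net.name, "cidr": net.cidr, "members": net.vm_ids}
        for layer in self.layers:
            layer.apply(intent)


if __name__ == "__main__":
    stack = [
        StackLayer("hypervisor-vswitch", "host-17"),
        StackLayer("nic", "host-17/eth0"),
        StackLayer("tor-switch", "rack-4-tor"),
    ]
    Controller(stack).provision(
        VirtualNetwork("tenant-a", "10.1.0.0/24", ["vm-1", "vm-2"])
    )
    for layer in stack:
        print(layer.kind, layer.config)
```

The design choice the sketch illustrates is a single control point driving every layer, so one piece of intent is rendered consistently end to end rather than bolted on at whatever insertion points happen to be available.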

Google's high network performance comes from its ability to control every single component throughout the network stack. “Rather than being forced to create compromised solutions based on available insertion points, we can design end-to-end secure and performant solutions by coordinating across the stack,” Vahdat wrote.

As it does with most of its software and hardware, Google designed its own SDN solution. Acceptance of SDN as a concept is growing rapidly throughout the data center networking world, where the flexibility server virtualization brought to compute capacity has made network management complicated and difficult.

Many networking vendors have turned to SDN to solve this problem, building products that automate network management. Different approaches to SDN have emerged. The most popular is to create a virtual network overlay on top of the physical network infrastructure, with an SDN controller dictating configuration requirements to the hardware underneath using protocols such as OpenFlow, the most widely used open SDN protocol.
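
As a rough illustration of that overlay model, the sketch below uses the open-source Ryu controller framework (an assumption; the article names only the OpenFlow protocol, not any particular controller) to push a flow rule to each switch that connects to the controller.

```python
# Minimal OpenFlow sketch, assuming the Ryu framework: when a switch
# connects, the controller installs a low-priority rule that sends
# unmatched traffic up to the controller for a decision.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class OverlayController(app_manager.RyuApp):
    """Pushes a default rule to every switch that connects to the controller."""

    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        datapath = ev.msg.datapath          # the physical or virtual switch
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match all traffic and hand it to the controller, which would then
        # install more specific per-tenant overlay rules on demand.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

Started with ryu-manager, the app installs this table-miss rule on every connecting switch; a production controller would then layer the actual overlay forwarding rules on top of that default.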

Cisco, the world's largest vendor of networking equipment, has been promoting its own approach to SDN, complete with its own open protocol. The company announced the protocol, called OpFlex, earlier this week.

Instead of communicating detailed configuration information to each piece of equipment underneath, Cisco's Application Policy Infrastructure Controller (APIC) sends high-level application policy data to the infrastructure, leaving the specific configuration details to the hardware itself.

The equipment south of APIC has to understand OpFlex, of course, and be able to self-configure.
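
To contrast with the OpenFlow example above, here is a minimal, hypothetical sketch of that division of labour: the controller distributes an application-level policy and the device renders it into its own low-level configuration. The policy structure, the OpFlexCapableSwitch class and the ACL syntax are illustrative assumptions, not Cisco's actual OpFlex schema or APIC API.

```python
# Hypothetical declarative-policy sketch: the controller sends high-level
# intent; each device translates it into its own configuration.
POLICY = {
    "name": "web-to-app",
    "source_group": "web-tier",
    "dest_group": "app-tier",
    "allow": [{"protocol": "tcp", "port": 8080}],
}


class OpFlexCapableSwitch:
    """A device that understands the policy and configures itself."""

    def __init__(self, name: str, local_groups: dict[str, list[str]]):
        self.name = name
        self.local_groups = local_groups   # group name -> locally attached IPs
        self.acl: list[str] = []

    def render(self, policy: dict) -> None:
        # Only rules relevant to this device's own ports are installed;
        # the controller never sees (or sends) these device-level details.
        for dst_ip in self.local_groups.get(policy["dest_group"], []):
            for rule in policy["allow"]:
                self.acl.append(
                    f"permit {rule['protocol']} any host {dst_ip} "
                    f"eq {rule['port']}"
                )


if __name__ == "__main__":
    leaf = OpFlexCapableSwitch("leaf-1", {"app-tier": ["10.0.2.11", "10.0.2.12"]})
    leaf.render(POLICY)          # the device self-configures from the policy
    print("\n".join(leaf.acl))
```

The trade-off the last two paragraphs describe is visible here: the controller stays small and application-centric, but every device south of it must carry enough intelligence to do this rendering itself.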