Data center operators and cloud service providers are notoriously secretive about their trade, especially about failures or areas where they believe they have a competitive edge. But Amazon Web Services (AWS) goes beyond this, operating behind a veil of almost complete secrecy.

Facebook founded the Open Compute Project and has been joined by Microsoft, Rackspace and Apple. Even Google has been happy to talk about its efforts to build more energy-efficient data centers.

AWS glimpses

AWS doesn’t share like that – but it sometimes opens up a bit. In 2014, AWS offered glimpses of custom servers and even custom Intel processors, and in April 2015 it finally revealed the revenue numbers for its cloud computing service.

Pictures from inside Amazon data centers are rare; however, during a talk at the 2015 AWS re:Invent conference in Las Vegas, Jerry Hunter, vice president of infrastructure at AWS, included images of a Google rack in his presentation (see box). Hunter said that sheer scale lets AWS build radically different data centers – and then leverage them to the hilt.

Gartner analysts estimate the AWS infrastructure-as-a-service cloud is 10 times larger than the next 14 competitors combined. “From building our own networks, to building our own servers, to building our own data centers, and from the largest facilities to the smallest appliances and networking gear – this [scale] allows us to simplify the design,” said Hunter in his session, explaining that everything is ultimately delivered at reduced cost compared with off-the-shelf hardware.

Part of this obsession with building its own specialized hardware stems from the fact that commercially available hardware simply doesn’t meet the needs of AWS. For example, conventional networking gear is often encumbered by a generic design that is poorly optimized for AWS workloads and burdened with complex features the company does not need. Without offering details, Hunter said each AWS data center operates on a purpose-built network that offers the traffic engineering and other specific optimizations it needs.

In other ways, these data centers are reassuringly familiar. “Each data center has two fiber paths coming out of it,” said Hunter. Existing dark fiber is used where available, though AWS will also trench its own fiber where necessary.

Inter-zone connectivity

Ample fiber connectivity between data centers is necessary: AWS has recently clarified that each Availability Zone (AZ) comprises at least one dedicated data center and can span up to six. The inter-AZ network has a peak capacity of more than 25 terabits per second (25,000Gbps).
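For readers who know AWS mainly from the customer side, here is how those zones surface through the public API. The sketch below is a minimal illustration using the boto3 SDK; the region name and replica names are assumptions for the example, not details from Hunter’s talk.

    # Minimal sketch (not from the article): list a region's Availability
    # Zones and round-robin replicas across them. Since each zone comprises
    # at least one dedicated data center, replicas placed in different
    # zones end up in different physical facilities.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

    # Enumerate the zones currently available in this region.
    zones = [z["ZoneName"]
             for z in ec2.describe_availability_zones()["AvailabilityZones"]
             if z["State"] == "available"]

    replicas = ["web-1", "web-2", "web-3"]  # hypothetical workload
    placement = {name: zones[i % len(zones)] for i, name in enumerate(replicas)}
    print(placement)

Replication traffic between instances placed this way crosses the inter-AZ fiber described above, which is one reason those links need such enormous capacity.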

Hunter also revealed that AWS started building its own internet backbone in 2013, for direct control, better quality of service and reduced cost. But AWS is not averse to using the public internet either. “If it turns out that the fastest and most effective route is the internet, then we will go through the internet,” said Hunter. “This improves performance and reduces jitter.”

It appears that no data center or cloud is an island, even one as large as AWS. It still has to rely on suppliers to get its servers and networking equipment manufactured and shipped to its many data centers – and here the sheer size of the AWS cloud could have worked against it.

In 2011, flooding in Thailand affected millions. AWS was hit because most of the world’s hard disk drives (HDDs) are assembled in Thailand, and some critical parts are made there. “Our wake-up call came in the Thailand flood of 2011,” said Hunter. “When we did this tour of the HDD manufacturers, there was a part that was made by a single vendor.”

It turned out that sourcing the parts AWS requires is hard without direct relationships with suppliers – a lesson in the importance of its supply chain. Since then, AWS has worked hard to build up supplier relationships for its own servers, storage and networking gear.

To move that hardware to its many data centers efficiently, AWS drew on the delivery expertise of its parent, Amazon, establishing mechanisms to feed new hardware into its ever-growing facilities. “Amazon is world-famous for delivering products… so we spent time with Amazon to learn more about how we can improve our processes,” he said. “We turned our supply chain from a potential liability into a strategic advantage.”

AWS is dogmatic about security. The right badge and a personalized PIN are required to access a data center, as well as sensitive areas such as the switchboards and power generators. And in case you are thinking of mounting a Mission: Impossible-style infiltration of an AWS data center, “many of them” are also secured by internal sensors that can be tripped, and are staffed by a security force. As an additional layer of security, video camera feeds are monitored in tandem by local and remote teams.

Multi-layer security

The data center must be secure before the first piece of networking or server gear rolls into the facility. Hunter ticked off a list of measures – the perimeter fence, the building perimeter, the doors and a metal detector – which he says are monitored around the clock. If you are ever invited to visit an AWS facility, don’t bring a storage device: Hunter said any disks introduced into the data center are tagged and closely tracked. “So, once something goes into the data hall, it doesn’t come out without setting off the alarm.”

And when it’s time to retire a storage drive, it is degaussed and then shredded into scrap metal. “This is the only way storage drives leave our data centers,” he said. “We don’t RMA [return] them for repairs. This is not a place where we cut costs; ensuring that customer data is secured is our first priority.”

The attraction of the public cloud seems inexorable, and many enterprises are looking to hybrid deployments to balance their compute needs with regulatory compliance. Will other cloud providers eventually get large enough to justify their own custom-built everything? And if so, will they be more generous in sharing their proprietary hardware and data center know-how?

Hunter’s presentation was heavily scripted and peppered with generic facts and figures about the cloud. Showing Google’s kit rather than his own company’s was not a big deal, but there could be no better illustration of how AWS continues to hold its cards close to its chest. The company declined to comment for this story.

This article appeared in the November 2015 issue of DatacenterDynamics magazine