If you want to get a lot of data from A to B, it is often quicker to send a disk or a tape or a whole NAS box by FedEx than to wait for the data to transfer over the net. It’s called Sneakernet, and we’ve known about it for a long while.
“Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway,” said the great computer scientist and educator Andrew Tanenbaum, back in the 1980s. In those days, international bandwidth was hard to come by, and some countries relied on Sneakernet: the Australian part of Usenet, for instance, received files from the rest of the world by courier.
Sneakernet… and pigeon-net?
In 2009, a South African tech firm sent a carrier pigeon 60 miles from Howick to Durban carrying a 4GB USB stick, while sending the same file over the broadband network. The data carried by pigeon was uploaded in two hours, by which time the broadband network had only transferred 4.2 percent of the file.
You might think that faster network speeds will eventually do away with Sneakernet, but storage capacities keep growing as quickly as, if not more quickly than, the bandwidth available on networks.
On the What If? blog, Randall Munroe, of XKCD fame, answered the question “When - if ever - will the bandwidth of the Internet surpass that of FedEx?” His response was that even given the zettabytes of data which the Internet will soon be carrying, it will be around 2040 before it can move data faster than boxes full of laptop hard drives or SD cards. And since storage densities are always increasing, the Internet may never catch up with what can be carried on Sneakernet.
It will be 2040 before the Internet can deliver more than the theoretical bandwidth of FedEx - and that’s assuming storage technologies don’t improve
This all might sound like absurd theorizing, but Google recently backed up the data from the Hubble space telescope using hard drives sent through the post.
AWS sends in a truck
And last week Amazon announced the most extreme Sneakernet implementation yet known: Snowmobile is a 45-foot shipping container which can be filled with 100 petabytes (100 million gigabytes) at a customer data center, and then driven by a semi truck to an Amazon facility where the data will be uploaded.
Delivering that much data over a 1Gbps connection would take more than 28 years, Google calculates. Wired worked out that driving that data from New York to San Francisco at an average speed of 65 miles per hour gets it there in 45 hours, for an effective speed of nearly 5,000 Gbps (or 5Tbps).
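Those figures are easy to sanity-check. A quick back-of-the-envelope in Python, under my own assumptions of decimal petabytes (1 PB = 10^15 bytes) and a roughly 2,900-mile New York to San Francisco drive (the article’s 28-year figure likely comes from reading petabytes as 2^50 bytes, which pushes the link time a few years higher):

```python
# Rough check of the Snowmobile bandwidth numbers.
# Assumptions (mine, not Amazon's or Wired's): 1 PB = 1e15 bytes,
# and a ~2,900-mile NY-to-SF drive at 65 mph.

data_bits = 100 * 1e15 * 8                    # 100 PB payload, in bits

# Time to push it all over a 1 Gbps link
link_years = (data_bits / 1e9) / (365 * 24 * 3600)
print(f"1 Gbps link: {link_years:.1f} years")     # ~25.4 years

# Effective bandwidth of the truck itself
drive_hours = 2900 / 65                       # ~45 hours
truck_tbps = data_bits / (drive_hours * 3600) / 1e12
print(f"Truck: {truck_tbps:.1f} Tbps")            # ~5.0 Tbps
```

Either way you count the petabytes, the truck comes out thousands of times faster than a gigabit link.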
Of course, the total transfer must take into account the time taken to fill the truck with data and empty it at the other end, but that can be made very quick: plugging multiple 40Gbps fibers into a switch, Amazon reckons it can achieve a terabit per second (1Tbps) and fill the container in ten days.
So the total time would be roughly 22 days, for an overall speed of around 0.4Tbps. That’s not far short of half the LAN speed, which makes sense: the process is dominated by the two LAN transfers that load and unload the box.
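Putting the whole pipeline together with the article’s round figures (a sketch under my own assumptions: decimal petabytes, a 45-hour coast-to-coast drive, and symmetric ten-day load and unload stages):

```python
# End-to-end Snowmobile transfer time and effective throughput.
# Assumptions (mine): 1 PB = 1e15 bytes, 45-hour drive, and the
# ten-day fill time Amazon quotes, mirrored for unloading.

data_bits = 100 * 1e15 * 8            # 100 PB payload, in bits

fill_days = 10                        # load at ~1 Tbps
drain_days = 10                       # unload at the same rate
drive_days = 45 / 24                  # NY to SF at 65 mph

total_days = fill_days + drain_days + drive_days
effective_tbps = data_bits / (total_days * 86400) / 1e12

print(f"Total: {total_days:.0f} days")            # ~22 days
print(f"Effective: {effective_tbps:.2f} Tbps")    # ~0.42 Tbps
```

The driving leg barely registers: shaving it to zero would only lift the effective rate from about 0.42 to 0.46 Tbps, so the box’s speed is set almost entirely by how fast you can fill and empty it.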
You could look at Snowmobile as a warp in Internet space that brings the two data centers closer together in WAN terms, so data can be moved at LAN speed between them. Of course, it doesn’t do anything for latency, as that’s a function of the speed of light.
Snowmobile was a showstopper in AWS CEO Andy Jassy’s re:Invent presentation, but it’s not a stunt. Amazon already has customers for the product, including a satellite imaging firm.
A version of this story appeared on Green Data Center News