In November 2018, Packet announced a partnership with tower operator SBA Communications to bring its bare metal cloud to the Boston suburb of Foxborough, MA. Below are lessons learned from this industry-first deployment.
Over the last few months we opened up a number of facilities, including new or expanded sites in Sydney, Pittsburgh, Phoenix, Seattle and Ashburn.
However, our data center in Boston was a bit different: instead of a traditional colocation facility, we deployed our bare metal cloud in a mobile edge data center operating at the base of an SBA Communications cell tower site, just a stone’s throw from Gillette Stadium in lovely Foxborough, MA.
While some of our early users have remarked on how awesome it is to have infrastructure located in the heart of the Northeast, just milliseconds away from downtown Boston, we didn’t trudge servers through the snow purely for the sake of latency. Instead, we were looking for something more valuable at this stage of the game: knowledge and experience.
Failing, thoughtfully
It’s cliché to talk about shipping often and failing fast in the world of software development. By iterating quickly on a “minimum viable product” (MVP) in response to user and market feedback, you can quickly evolve and build the product around the users. At least that is the theory!
Doing the same thing in the physical world is a bit harder, of course. In the case of opening a small data center next to a football stadium, you have to spool up a pretty sizable chunk of capital and activate a wide variety of partners just to turn on a single server. That’s why I prefer the term “fail thoughtfully” - which emphasizes the act of prototyping and iterating.
Our goal with the Foxborough deployment (which our users call “BOS2” via our API) was essentially to deploy a first prototype: of our platform, logistical expertise, hardware model, partnership with SBA Communications, etc. As such, we measured the hours we spent in various phases of the deployment and looked hard for lessons that we could bring back and use to advance to the next level.
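For the curious, here’s a minimal sketch of what targeting BOS2 looks like from the API side (Python with the requests library). The project ID, token, plan and OS slugs below are placeholders, and the endpoint/field shapes are a sketch of the standard device-creation call rather than a reference; only the “bos2” facility code comes from the deployment itself.

```python
# Sketch: provisioning a bare metal server into the BOS2 facility via the API.
# Project ID, token, plan and OS slugs are placeholders for illustration.
import requests

API_URL = "https://api.packet.net"
PROJECT_ID = "YOUR-PROJECT-UUID"

resp = requests.post(
    f"{API_URL}/projects/{PROJECT_ID}/devices",
    headers={"X-Auth-Token": "YOUR-API-TOKEN", "Content-Type": "application/json"},
    json={
        "hostname": "edge-test-01",
        "facility": "bos2",          # the Foxborough site
        "plan": "t1.small.x86",      # illustrative plan slug
        "operating_system": "ubuntu_18_04",
    },
)
resp.raise_for_status()
print(resp.json()["id"])  # UUID of the newly provisioned device
```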
Foxborough was about learning, not latency. With that in mind, here are five things we learned!
#5. Electricity isn’t created equal
As a hosting company, we spend a lot of energy on energy! Power is a very real cost, and ensuring that we have redundant and high quality power is a standard part of any normal data center evaluation and deployment.
Of course, what we deployed in Boston is a bit different. Our partners at SBA outfitted the site with a BaseLayer containerized data center and procured the appropriate 3-phase 230V power feed from the local power company. This was delayed by a winter storm (as evidenced by the above pictures), which by law pulled crews to emergency repairs instead of new commercial installs. Additionally, power companies are naturally the king of all monopolies - there is no amount of prodding or pulling that can make a local utility company move faster.
As such, finding sites that have enough power and the right kind of power is super important. We're looking at more standard 110V power for future builds, both to widen the range of sites we can go to and to dramatically reduce installation time.
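To make the trade-off concrete, here’s a back-of-the-envelope sketch; the amperages, derating factor and per-server wattage are assumptions for illustration, not measurements from the site:

```python
# Back-of-the-envelope circuit capacity: how many servers fit on a feed?
# All numbers are illustrative assumptions, not measurements from BOS2.
import math

def circuit_capacity_watts(volts, amps, phases=1, derate=0.8):
    """Usable power on a circuit at a continuous-load derating."""
    phase_factor = math.sqrt(3) if phases == 3 else 1.0
    return volts * amps * phase_factor * derate

SERVER_WATTS = 350  # assumed average draw per server

for label, volts, amps, phases in [("3-phase 230V / 30A", 230, 30, 3),
                                   ("single-phase 110V / 30A", 110, 30, 1)]:
    watts = circuit_capacity_watts(volts, amps, phases)
    print(f"{label}: {watts / 1000:.1f} kW usable, ~{int(watts // SERVER_WATTS)} servers")
```

The exact numbers matter less than the shape of the trade-off: a single-phase 110V feed carries far less per circuit, but it’s the kind of drop that can be turned up quickly at far more sites.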
(Note to self: remote crews sometimes turn off generators when they leave for the day - don’t let this happen while you’re using the data center!).
#4. Connectivity and site selection
A good friend once said that you need three ingredients for a great data center: awesome connectivity, cheap power and plenty of space. And you can usually get two out of three!
Of the three, connectivity is often the “long pole in the tent,” and given the local, in-market nature of transport connectivity, it’s often a buyer-beware scenario.
Proper site selection, done in advance with robust datasets, should vet for the trifecta of space, power and connectivity. And with fiber, just because it’s at the manhole or pulled to the site doesn’t mean you’ll be allowed to use it! Knowing this before you deploy and having clear access to proper connectivity is critical to picking the right place to drop your edge.
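If it helps to picture that vetting step, here’s a toy sketch of the kind of filter we mean; the thresholds and the candidate data are invented for illustration:

```python
# Toy site-vetting filter: keep only candidates that clear all three bars.
# Thresholds and candidate data are invented for illustration.
candidates = [
    {"name": "site-a", "space_sqft": 400, "power_kw": 60, "fiber_providers": 2, "fiber_rights": True},
    {"name": "site-b", "space_sqft": 900, "power_kw": 20, "fiber_providers": 3, "fiber_rights": True},
    {"name": "site-c", "space_sqft": 600, "power_kw": 80, "fiber_providers": 1, "fiber_rights": False},
]

def viable(site, min_space=300, min_kw=40, min_providers=2):
    # "Fiber at the manhole" only counts if you're actually allowed to use it.
    return (site["space_sqft"] >= min_space
            and site["power_kw"] >= min_kw
            and site["fiber_providers"] >= min_providers
            and site["fiber_rights"])

print([s["name"] for s in candidates if viable(s)])  # -> ['site-a']
```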
#3. Here, hold my server: Depoting & logistics
Anyone who has ever installed a few dozen or a few hundred servers in a data center knows the pain of unboxing pallets of gear.
Even at a “normal” data center, this is a complicated procedure: you need storage space to warehouse the gear until you’re ready to roll; you have to tear apart and then clean up a lot of packaging (often with no trash area!); and it takes an enormous amount of manual labor. Then, down the road, when you need to RMA or send something back to a manufacturer, you are stuck trying to find a box and materials. It’s a royal pain!
Doing it at an edge data center is even more complex - you can’t really ship $500k of servers to an unmanaged lot with a chain-link fence around it! First you need a regional depot or logistics operator, and then transport to the site.
We think the big lesson here is around packaging, and that’s why we’re interested in and working on standard, reusable and durable packaging for our Open19 and other server types. Just look at what companies like Bressler Group have done for logistics-heavy scenarios, and you’ll see what we mean.
#2. Look Ma, no cables
There is nothing quite like cabling servers in a freezing cold enclosure to drive home the point: we (desperately) need a new hardware model. In addition to removing the act of cabling, which is a tough job for professionals in even the best of conditions, we just need the flexibility that comes with low-cost, easy-to-service data center hardware.
Our Open19 investments should start to pay off this summer, and we’re looking forward to not only reduced initial install time but also the ability to cost-effectively move expensive hardware from one facility to another, or to send it back to a central facility for repair.
#1. Operator tooling
One of the most difficult things in our world is actually letting someone you don’t know play network janitor with your mission-critical infrastructure. Ideally, of course, everything gets installed properly and never goes wrong at a bad time, but Murphy’s Law ensures the opposite. Plus, servicing dozens or hundreds of sites with experienced field hands is nearly impossible.
While we can vastly simplify hardware by limiting cables and other complexities, there is still an incredible need for high quality asset management and tools for remote/field techs.
We are big fans of pairing detailed asset management with mixed reality and VR so that even relatively untrained technicians can find the right needle in the haystack. This is incredibly important as we start to arm our partners like SBA Communications and their field service teams with the ability to install, fix or extract equipment from these far-flung sites.
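As a purely hypothetical illustration of the kind of asset record such tooling would hand a field tech (the structure and field names here are invented for this sketch, not a description of our production schema):

```python
# Sketch of an asset record a remote-hands tool could render for a field tech.
# Field names and values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AssetRecord:
    serial: str                 # chassis serial, matched against the physical label
    site: str                   # e.g. "bos2"
    enclosure: str              # which container/cabinet at the site
    rack_unit: int              # RU position, counted from the bottom
    port_map: dict = field(default_factory=dict)  # server NIC -> switch port

ticket_asset = AssetRecord(
    serial="ABC123456",
    site="bos2",
    enclosure="container-1/rack-A",
    rack_unit=17,
    port_map={"eth0": "sw1:Ethernet21", "eth1": "sw2:Ethernet21"},
)

# A work order can then tell the tech exactly where to put hands:
print(f"Pull {ticket_asset.serial} from {ticket_asset.enclosure}, RU {ticket_asset.rack_unit}")
```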
What’s next? Chaos, of course!
Since deploying our site in Foxborough (as well as two in Chicago), we’ve supported a wide range of use cases, including in partnership with our friends at Federated Wireless. It’s pretty exciting to see what a new class of wireless-powered, low-latency users can do!
Alongside the use cases, we’ve turned on our inner chaos monkey, “stress testing” various “Day 2” operational issues: 2am phone calls, remote hands timelines and performance, the cost of adding an N+1 server to a site, and more.