“I will have been in this industry for 30 years next July, and I am the most excited right now, because everything has been building up to this point,” Dean Nelson, head of Uber Compute and founder of Infrastructure Masons, said.
Talking at the DCD>London Day One keynote, Nelson laid out his infrastructure vision for Uber, the ride-hailing and transportation services company that has grown rapidly over the past few years.
“Today we’re in six continents, we have 78 countries, 600 cities, 3m+ active drivers, 15m rides per day, 75m monthly active riders,” Nelson said, expounding upon the breakneck success Uber has experienced.
“We’ve had 10bn trips in eight years, five billion of which were in the last year. But that represents just one percent of the global miles driven in that time frame. So the key is: We’re just getting started.”
To handle its existing load, and its potential growth, Nelson’s team has turned to both colocation data centers and the public cloud.
They call this ‘the tripod strategy’: “It is not on-prem versus cloud, it’s not an 'or,' it’s an 'and.' When you get to scale, you have volumes where you can get to price performance that rivals cloud. But you can’t move like cloud can; we want to be able to say ‘turn it up,’ and they have massive scale to do that immediately.”
He added: “The thing that has kept me up for the last two years is keeping up with this demand. Every forecast has been wrong, it’s been too small. The ability for us to leverage the best of and the speed of the cloud, but the price of on-prem is a really big strategic advantage. This is about serving your business needs.”
For on-prem deployments, which serve the majority of Uber’s workloads, Nelson outlined the standardized deployment the company relies on, called Uber Metal.
“Every server has a 25Gbit network,” he said. “Every rack has a 600G uplink. Each of those racks we have 16 of, which make a pod. We then have 30 pods, making 480 cabinets.”
That 480 is then bumped up further, with 32 racks for network and 64 racks for miscellaneous extras, “because we never know what’s going to happen,” leading to a total of 576 racks.
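The rack arithmetic above can be checked with a quick sketch (the variable names are illustrative, not Uber’s; only the figures come from Nelson’s talk):

```python
# Uber Metal deployment arithmetic, as described in the keynote.
RACKS_PER_POD = 16    # racks per pod
PODS = 30             # pods per deployment
NETWORK_RACKS = 32    # racks reserved for network
MISC_RACKS = 64       # racks for miscellaneous extras

compute_racks = RACKS_PER_POD * PODS                    # 16 * 30 = 480 cabinets
total_racks = compute_racks + NETWORK_RACKS + MISC_RACKS  # 480 + 32 + 64 = 576

print(compute_racks)  # 480
print(total_racks)    # 576
```

The numbers line up with the totals Nelson quoted: 480 compute cabinets, 576 racks overall.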
The company uses four types of rack: compute, database, storage (tiered storage with warm and cold), and GPUs for machine learning.
"One of the concepts we’re moving for is defragging the data center. The fabric architecture allows us to move things around, so that there is no stranded power," Nelson said.
With the concept explained, Nelson added: “There’s the makeup - go build it.
“As an industry we all need to work together. We’re in the right place, I’ve never seen this much demand, the conversations I’m having - the volumes show no sign of slowing.”
Uber shows no sign of slowing, Nelson said, with the company hoping to add tens of regions (clusters of three data centers) and hundreds of zones (single data centers), including edge data centers.
“This means lower latency, but also more resiliency, so you can have data centers with lower tiers,” he said.