For the last 20 years, digital infrastructure has been centralizing, and the results have been incredible. Resources in large shared data centers are massively more efficient than those in small server rooms - and therefore massively cheaper. As a result, we now have access to an undreamed-of wealth of online services, essentially for nothing.
But in the last few years, another strand of digital infrastructure has emerged. New applications, including streaming media, the Internet of Things, virtual reality and connected cars, will require large amounts of data delivered with very low latency.
For this reason, we are told, the world needs a new layer of infrastructure - Edge resources, close to the end users or devices they serve. The industry has got excited about this, with Gartner analysts predicting that the share of enterprise data processed at the Edge will surge over the next four years, from 10 percent today to 75 percent.
In response, vendors have stepped up: their main proposal to deliver the Edge is through modular “micro” data centers - lockable cabinets, each holding a single traditional rack or half a rack, complete with its own power and cooling.
What is the cost?
But do the costs stack up? The trend towards centralization was driven by economies of scale, which made webscale data centers cheaper to use. Edge will push applications back out to distributed resources which must, surely, be more expensive - so Edge applications can expect to pay more for them.
At this stage, Edge economics are a series of informed guesses, but Duncan Clubb, director of IT consulting at CBRE, agrees that “Edge facilities will naturally be more expensive than the ‘traditional’ cloud or colo services.”
Schneider Electric disagrees, claiming that Edge resources can actually be cheaper than centralized cloud services. “A centralized 1MW data center is $6.98/watt and a distributed micro data center is $4.05/watt,” according to a White Paper, Cost Benefit Analysis of Edge Micro Data Center Deployments.
The paper compares the capital expenditure (capex) of two alternatives: a “traditional” single data center with 200 5kW racks in a hot-aisle arrangement, and an Edge arrangement where the racks are installed in a set of micro data centers in different buildings, each one containing a single 5kW rack.
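As a rough sanity check, the sketch below simply multiplies the quoted per-watt figures by the study’s 1MW of IT load. The $/watt numbers are the white paper’s; everything else - the variable names and the arithmetic - is purely illustrative:

```python
# Back-of-envelope capex comparison using the figures quoted above.
# The $/watt numbers come from the Schneider white paper; the rest
# (variable names, the simple multiplication) is illustrative only.

RACKS = 200          # racks in the study
KW_PER_RACK = 5      # kW of IT load per rack
IT_LOAD_W = RACKS * KW_PER_RACK * 1_000   # 1,000,000 W = 1 MW

CENTRALIZED_USD_PER_W = 6.98   # "traditional" 1MW data center
MICRO_DC_USD_PER_W = 4.05      # distributed micro data centers

centralized_capex = IT_LOAD_W * CENTRALIZED_USD_PER_W   # ~$6.98m
micro_dc_capex = IT_LOAD_W * MICRO_DC_USD_PER_W         # ~$4.05m

print(f"Centralized: ${centralized_capex / 1e6:.2f}m")
print(f"Micro DCs:   ${micro_dc_capex / 1e6:.2f}m")
print(f"Saving:      ${(centralized_capex - micro_dc_capex) / 1e6:.2f}m "
      f"({1 - micro_dc_capex / centralized_capex:.0%})")
```

At this scale, the quoted figures amount to roughly $6.98m against $4.05m of capex - a saving of around 42 percent for the distributed option.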
The Edge option comes out cheaper partly because micro data centers can be deployed cheaply in conventional office space, where power provision and real estate are sunk costs, while a centralized data center needs fresh capital expense for all of these.
“This analysis is still somewhat hypothetical,” admits Victor Avelar, director of Schneider’s Data Center Science Center, which produced the paper. “However, I stand by the fact that when you locate a micro data center in an existing building, there’s infrastructure that you get ‘for free’ because you don’t need to build the building, the electrical distribution, generator, lighting, etc.”
On one level, Avelar says, the analysis could actually underestimate the savings from shifting Edge loads into micro data centers. The study assumed that micro data centers would not have 2N redundancy in their power and cooling, because in practice they generally don’t. For a fair comparison, it also used a 1N specification for the centralized facility - which in reality would always have some level of redundancy, and would therefore cost more than the study allows.
Limited study
However, there are some issues with the study. Firstly, it used a “traditional” centralized data center as its baseline, so it misses the economies of scale that hyperscalers achieve through novel architectures in even bigger facilities.
Secondly, it does not cover running costs and operational expenditure (opex). Centralized facilities make significant savings here. Cloud data centers are built where electricity can be bought in bulk at a favorable rate, while micro data centers have to accept the local electricity rate that applies in their building.
Cloud data centers also consolidate IT loads and storage, so the amount of hardware needed to run a given application would be less in a centralized site.
There’s another extra component of Edge costs, and that is management. “If we really have mass produced edge computing, that is everywhere, there are not enough people to have dedicated facilities guys, managing these operations,” said Suvojit Ghosh, managing director of the Computing Infrastructure Research Centre at McMaster University, at DCD>New York. “It’s not a matter of cost. You can’t have one person per site, because there aren’t enough people.”
Edge facilities are therefore designed to be as autonomous as possible, and remotely manageable. Software is monitored and updated remotely, and hardware fixes are done by untrained staff installing kit sent out by post. But there will still be an overhead in the cost and time of sending and managing these virtual and physical updates.
Set against these points, Avelar reminds us that Edge applications have specific communications needs. Placing them centrally will hurt them: as well as increased latency, they will have a higher cost to communicate to end users and devices.
This is a good point: Edge applications are different to traditional ones. In practice, they may communicate more than other applications, and their latency demands may be absolute. For instance, virtual reality goggles must respond to a change in the user’s gaze in 10ms or less, and self-driving cars must obviously respond just as quickly to avoid obstacles.
This doesn’t just affect where the apps are loaded, but also how they are structured, because part of that latency is in the operation of the application itself. “Applications that require low latency need to be ‘always on’,” points out Clubb. “Low latency needs software to be loaded in memory and available for immediate response. Latency is destroyed by context switching, swapping or loading activities, in which the compute resource has to go and load up software into the processor from somewhere else in order to respond to incoming data.”
This means that, wherever they are operating, Edge applications have to be in high-grade computing space, says Clubb: “Ideally in a core and its memory, not sitting on a disk or in the wrong type of memory ready to be swapped in on demand.”
Avelar divides Edge workloads according to whether they are compute-intensive or storage-intensive. Compute-intensive applications have a latency limit of around 13ms, while the needs of storage-intensive applications vary depending on how the data is replicated. If real-time replication is needed, they demand latency of 10ms or less.
These needs limit the distance from the IT resource to the end user or device, says Avelar. Storage-intensive workloads must be within 100km (60 miles) of their consumers, while compute-intensive loads can be 200km to 300km (120 to 180 miles) away.
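Those distances look plausible when you consider propagation delay alone. The sketch below assumes signals travel through optical fiber at roughly two-thirds of the speed of light - about 5 microseconds per kilometer, one way - and ignores the routing, switching and processing overheads that consume most of a real latency budget:

```python
# Round-trip fiber propagation delay vs. the latency budgets quoted above.
# Assumes signals travel fiber at ~2/3 the speed of light (~5 us per km,
# one way) and ignores router hops, serialization and processing time,
# which eat most of a real latency budget.

US_PER_KM_ONE_WAY = 5.0   # approx. microseconds per km in optical fiber

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over fiber."""
    return 2 * distance_km * US_PER_KM_ONE_WAY / 1_000

for distance_km, budget_ms in [(100, 10), (300, 13)]:
    rtt = round_trip_ms(distance_km)
    print(f"{distance_km:>4} km: {rtt:.1f} ms propagation, leaving "
          f"{budget_ms - rtt:.1f} ms of a {budget_ms} ms budget "
          f"for network equipment and the application itself")
```

Over 100km, the round trip in fiber costs only about 1ms, leaving the rest of a 10ms budget for the network equipment and the application itself - which is why the application side of latency, discussed above, matters so much.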
Both of those distances are far greater than the picture usually conjured of Edge deployments. For instance, a regularly-cited model would place an Edge micro data center at every cell tower. While a cell tower has a maximum range of about 70km (45 miles), towers are typically spaced 2-3km (1-2 miles) apart in populated areas. They are even closer together in cities, and likely to get closer still as 5G radio technologies arrive, with their much shorter signal range.
The picture becomes more complex when you consider that applications aren’t monolithic. Developers are likely to make sensible decisions about different parts of the code, says Clubb. “In practice, I expect to see the developers and owners of apps that will use the low latency aspects of 5G to split out the low latency code and deploy only the smallest necessary functions into Edge DCs, with normal cloud or data centers providing the majority of data processing and storage.”
Higher-cost Edge resources may end up running just 10 percent of the overall Edge application, with the other 90 percent still running in a remote back end on an optimized virtualized server, using low-cost electricity.
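What might that split look like in code? The sketch below is hypothetical - the function names and the queue standing in for a cloud link are our own invention - but it shows the shape Clubb describes: a small, always-loaded handler at the Edge answers the latency-critical request immediately, while the bulk of the work is queued for the remote back end:

```python
# A minimal sketch of an Edge/cloud split, under the assumptions above.
# All names here (handle_sensor_event, cloud_queue, cloud_worker) are
# hypothetical, not any vendor's API.

import queue
import threading

# Stand-in for the link to the remote cloud back end.
cloud_queue: "queue.Queue[dict]" = queue.Queue()

def handle_sensor_event(event: dict) -> dict:
    """Edge-resident path: respond within the latency budget."""
    response = {"device": event["device"], "action": "ack"}  # immediate, in-memory decision
    cloud_queue.put(event)   # hand the heavy lifting to the back end, asynchronously
    return response

def cloud_worker() -> None:
    """Back-end path: analytics and storage, with no hard latency constraint."""
    while True:
        event = cloud_queue.get()
        # ... batch processing, long-term storage, aggregation, etc. ...
        cloud_queue.task_done()

threading.Thread(target=cloud_worker, daemon=True).start()
print(handle_sensor_event({"device": "car-42", "telemetry": [0.1, 0.2]}))
cloud_queue.join()   # wait for the back end to drain the queue
```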
Ghosh says: “I don't think the cloud is going anywhere. We always need that compute in the background to process data that doesn't have the latency requirements and does require economies of scale.”
Premium pricing
Edge applications will be priced at a premium, Ghosh says, though he predicts that this premium will reduce as hardware evolves to meet Edge needs better.
For now, however, Clubb sees a reality to deal with: “Edge compute infrastructure will be expensive, well spec’d and with huge amounts of memory. Pricing models will be distinctly more expensive than normal cloud instances.”
This realization will weigh on the minds of those planning to roll out the first wave of Edge. Before Edge can truly take off, its applications will have to find a way to quantify that cost, and adopt a business model that justifies it.