Edge computing has been one of the major trends of the past several years. Applications increasingly require lower latencies, and the volume of data handled by endpoint systems has grown to the point where streaming it all back to a cloud data center can be too costly, too slow, and too bandwidth-hungry.
But one of the issues with Edge computing is that it is a fairly nebulous term that means different things to different people. Does the edge of the network refer to endpoint devices, or to the communications equipment that links such devices back to the core, or does it cover both of these examples and more?
Gartner, for example, defines Edge computing as solutions that facilitate data processing at or near the source of data generation, but goes on to add that Edge computing serves as a decentralized extension of campus networks, cellular networks, data center networks or the cloud.
This article appeared in Issue 40 of DCD>Magazine.
We don't all need micro data centers
For the telecoms industry, Edge computing has been closely identified with the development and deployment of 5G networks, with their goals of handling data rates of gigabits per second, minimal latency, and support for large numbers of simultaneously connected endpoint devices. These requirements are expected to drive cellular base stations to take on significantly more compute power, effectively turning them into miniature data centers.
Meanwhile, enterprises and service providers have also been investing in so-called micro data centers in order to serve the needs of Edge computing. These micro data centers vary in size, but a typical product is the equivalent of a data center rack with power distribution units and cooling encased in a protective enclosure, which can be populated with standard rack-mount servers, storage and switch kit.
Such solutions are perfect in a factory setting, for example, where a significant amount of compute power is required to monitor and control production lines, especially where multiple machine vision systems are employed, and fixed wiring is likely to be already in place for communications and power.
However, Edge computing covers such a broad range of applications and use cases that no single solution fits every problem. A spectrum of capabilities is needed to fill every niche, and many deployments call for hardware that is more compact, or differently specified, than a micro data center.
“There's actually a hierarchy of processing that you would want as you move from the edge of the network all the way into the core,” says Kurt Michel, senior vice president of marketing for Edge infrastructure firm Veea.
Veea develops what it refers to as smart edge nodes, which can start with a deployment of just a single node but can scale by adding more nodes if required, as nodes can communicate with each other via a built-in mesh networking capability. Each node is a tiny box that looks like a WiFi access point, but contains a 64-bit quad-core Arm processor running Linux.
According to Michel, this model emphasizes both computing and connectivity, which is important for Edge applications, and thanks to mesh networking the nodes can operate as if they were a single system.
“These separate nodes, you deploy them, and they will connect to each other. And what they do is they basically create a single virtual, connected compute platform. And they can connect to all of your different IoT-type devices, so cameras, thermal sensors, air quality sensors, vibration sensors, and the ways they connect might be Bluetooth, or LoRaWAN or ZigBee, or WiFi, or just plain old physical Ethernet,” he says.
Because the hubs operate as a distributed system, any IoT device connected to one of the nodes is visible to, and can be accessed by, applications running on any of the other nodes. It also means that the nodes can share workloads.
“The applications themselves run in Docker containers. And that makes these applications incredibly portable. So you can move them from one node to another node. And if you find a particular node becoming overwhelmed, you can deploy another node in that location,” Michel explains.
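Michel's description can be sketched in miniature. The following is an illustrative simulation, not Veea's actual software: the `Node` and `Mesh` classes are hypothetical, and simply model a cluster in which devices attached to any node are visible mesh-wide, and a portable (containerized) workload can be moved from one node to another.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One hypothetical smart edge node: local devices plus workloads."""
    name: str
    devices: set = field(default_factory=set)     # e.g. cameras, sensors
    workloads: list = field(default_factory=list)  # containerized apps

class Mesh:
    """Illustrative mesh: nodes pool devices into one virtual platform."""
    def __init__(self):
        self.nodes = {}

    def add_node(self, node):
        self.nodes[node.name] = node

    def visible_devices(self):
        # Any device attached to any node is visible to every node.
        return set().union(*(n.devices for n in self.nodes.values()))

    def migrate(self, workload, src, dst):
        # Portable workloads can be redeployed onto a less-loaded node.
        self.nodes[src].workloads.remove(workload)
        self.nodes[dst].workloads.append(workload)

mesh = Mesh()
mesh.add_node(Node("lobby", devices={"camera-1", "air-quality-1"}))
mesh.add_node(Node("stockroom", devices={"vibration-1"},
                   workloads=["mask-detector"]))

assert "camera-1" in mesh.visible_devices()  # visible from every node
mesh.migrate("mask-detector", "stockroom", "lobby")
```

In the real product, workload portability comes from Docker images rather than Python lists, but the shape of the idea is the same: the mesh behaves as one platform, so moving an application is a scheduling decision, not a reinstallation.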
One upshot of all this is that a mesh network can provide a decent amount of aggregate processing power if needed (perhaps as much as a micro data center), but that is not how the nodes are intended to be used. Instead, they are aimed at locations such as smart buildings, retail outlets or outdoor smart city environments, where the space or power to support a micro data center may not be available.
The range of applications that such devices might be used for is diverse. Michel cites the example of a retail outlet that might have a node connected to a security camera monitoring the entrance to the premises. The device could run a machine learning visual recognition model to detect people entering and whether they are wearing a Covid face mask, and generate an alert if not.
This hypothetical example illustrates some of the justifications for such edge deployments; streaming the video back to a cloud data center for processing may introduce unnecessary delays in generating a response, and incur unnecessary costs in network bandwidth.
“Anything that requires real time responsiveness, any control systems for robotic systems, industrial factory settings, whatever, all that real stuff that really can't handle the delay that going back to the cloud gives you,” Michel says. “You have just got to find the balance, you basically take your tasks, and you break them up into the things that need a rapid response and the things that require deeper processing.”
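Michel's point about splitting tasks can be expressed as a simple placement rule. The sketch below is hypothetical, with assumed task names and an assumed latency budget; it only illustrates the decision he describes, routing tight-deadline work to the edge and deeper processing to the cloud.

```python
# Assumed budget: a round trip to a cloud region plus queueing easily
# exceeds a tight control-loop deadline, so such tasks stay local.
EDGE_LATENCY_BUDGET_MS = 50

def place_task(name: str, deadline_ms: int) -> str:
    """Return where a task should run, based on its response deadline."""
    return "edge" if deadline_ms <= EDGE_LATENCY_BUDGET_MS else "cloud"

# Hypothetical workload mix for illustration.
tasks = {
    "robot-arm-stop": 10,         # safety control loop: must stay local
    "mask-alert": 200,            # near-real-time, but delay-tolerant
    "monthly-analytics": 60_000,  # deep processing: cloud is fine
}
placement = {name: place_task(name, d) for name, d in tasks.items()}
```

Real systems weigh more than latency (bandwidth cost, data sovereignty, model size), but the basic partition Michel describes, rapid-response work at the edge and heavyweight processing in the core, is the starting point.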
Many hardware types
It isn’t just specialist vendors that are looking to address the broad spectrum of device requirements that edge deployments encompass. In March, Lenovo expanded its range of ThinkEdge systems with a pair of ruggedized devices, the ThinkEdge SE30 and ThinkEdge SE50. Both are essentially PC hardware in compact enclosures designed for harsh industrial environments, but can be configured with 4G or 5G wireless modules in addition to WiFi, and feature RS232/422/485 serial ports for industrial peripherals.
However, products such as these largely leave it up to the user or a systems integrator to provide a suitable software stack for their Edge computing application, whereas a specialist like Veea offers a turnkey Edge node platform that allows the user to focus on making their application work.
Edge computing has been enabled by advances in computing that make it possible to add intelligence almost anywhere, and also by the spread of pervasive communications networks. But organizations need to take care when deciding whether Edge or cloud is the best place for data processing to happen, and also when choosing an appropriate platform from the wide choice available.