Back in 2014, a Gartner report said that the Internet of Things (IoT) would pose seven challenges in the data center: sheer volume of data, server technologies, data security, the data center network, consumer privacy, the need for higher availability, and increased data processing requirements.
Now the impacts of IoT are coming into sharper focus, both in the data center and across the network. Some of Gartner's predictions hold true, and we now have a better idea of how they might play out.
Latency and reliability
In a world of always-on ubiquitous connectivity, latency and reliability loom over everything, whether you’re talking about self-driving cars or Industry 4.0. These two challenges are driving much of the change that we’ll see in network design over the next few years.
If the industry is to realize the promised benefits of IoT, networks must support far more machine-to-machine communication in near-real time. In applications like autonomous vehicles, latency requirements are on the order of a couple of milliseconds. GSMA, the international association of mobile network operators, has specified that 5G latency should be 1 millisecond, a 50-fold improvement over 4G's current 50 milliseconds.
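To put those numbers in physical terms, here is a back-of-the-envelope sketch. The 110 km/h speed is an assumed figure; the latencies are the targets cited above:

```python
# Back-of-the-envelope: how far a vehicle travels while one network
# round trip is in flight at 4G vs. 5G latencies. Figures are illustrative.

LATENCY_MS = {"4G": 50.0, "5G": 1.0}  # round-trip latency targets cited above

def distance_during_latency(speed_kmh: float, latency_ms: float) -> float:
    """Distance travelled (in metres) during one network round trip."""
    speed_m_per_s = speed_kmh * 1000 / 3600  # km/h -> m/s
    return speed_m_per_s * (latency_ms / 1000)

for gen, lat in LATENCY_MS.items():
    d = distance_during_latency(speed_kmh=110, latency_ms=lat)
    print(f"{gen}: {lat:>5.1f} ms -> vehicle moves {d:.2f} m before a response arrives")
```

At highway speed, a 50 ms round trip means the car has moved roughly a meter and a half before any response arrives; at 1 ms, about 3 centimeters.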
Satisfying these requirements involves a radical rethink of how and where we deploy assets throughout the network. For example, routing and backing up data using a traditional star-type network design will become increasingly unfeasible. The sheer volume of traffic and the latency demands would easily overwhelm a north-south data flow, so topologies are being redesigned to provide more east-west connectivity.
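A purely illustrative comparison shows why: in a star design, device-to-device traffic hairpins through a core site, while an east-west path connects edge sites directly. The distances below are assumptions, and the 200 km-per-millisecond figure reflects light traveling through fiber at roughly two-thirds the speed of light:

```python
# Illustrative comparison (not a real network model): one-way propagation
# delay for traffic hairpinning through a core site (north-south) versus
# a direct east-west path between edge sites. Distances are assumed.

FIBER_KM_PER_MS = 200  # light in fiber covers roughly 200 km per millisecond

def one_way_delay_ms(path_km: float) -> float:
    return path_km / FIBER_KM_PER_MS

north_south = one_way_delay_ms(400 + 400)  # device -> core -> device, assumed 400 km legs
east_west = one_way_delay_ms(30)           # direct edge-to-edge link, assumed 30 km

print(f"north-south (via core): {north_south:.2f} ms one way")
print(f"east-west (edge peer):  {east_west:.2f} ms one way")
```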
Link reliability will be every bit as critical as latency, which means multiple failover paths wherever critical data is transported. For vehicle guidance, for example, the job of collecting, processing and storing the information may be shared among an assortment of curbside micro data centers and smart-city-enabled street fixtures.
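The math behind multiple failovers is the standard parallel-availability formula. A minimal sketch, assuming independent paths that each offer "three nines":

```python
# Hedged sketch: composite availability of n independent parallel paths,
# using the standard 1 - (1 - a)^n formula. Availabilities are assumptions.

def parallel_availability(per_path: float, n_paths: int) -> float:
    """Probability that at least one of n independent paths is up."""
    return 1 - (1 - per_path) ** n_paths

for n in (1, 2, 3):
    a = parallel_availability(per_path=0.999, n_paths=n)  # "three nines" per path
    print(f"{n} path(s): {a:.9f} available")
```

Under the independence assumption, two parallel paths turn three nines into six, and three paths into nine, which is why redundant meshing features so heavily in these designs.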
Compute/storage capacity moves to the edge
Traditionally, when we needed to go faster, we increased bandwidth. Eventually you reach the point where you run out of bandwidth, even over optical fiber. Given the amount of data we're talking about with IoT, that point will come sooner rather than later. One of the few tools left is to decrease the distance the data has to travel.
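Shortening the path also attacks the one delay component that cannot be engineered away: propagation. A rough distance budget, assuming light covers about 200 km per millisecond in fiber and that propagation consumed the entire latency budget:

```python
# Rough distance budget: given a round-trip latency target, how far away
# can the compute sit if propagation alone used the whole budget?
# The 200 km/ms figure is an approximation for light in fiber.

FIBER_KM_PER_MS = 200  # roughly two-thirds the speed of light in vacuum

for target_ms in (50.0, 10.0, 1.0):
    max_km = target_ms / 2 * FIBER_KM_PER_MS  # halve the budget for a round trip
    print(f"{target_ms:>5.1f} ms round trip -> compute within ~{max_km:,.0f} km")
```

A 1 ms round-trip target keeps the compute within roughly 100 km of the device before any switching or processing time is even counted; hence the move to the edge.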
So IoT data is increasingly being processed on the device itself via its SoC (system on a chip) and stored at the network edge. Alternatively, the device may send the raw data directly to compute/storage assets at the network edge for processing and storage. In either case, shortening the path allows network operators to increase the link capacity between the device and the compute/storage location.
Supporting all these edge nodes means deploying more mesh-type designs that can meet failover-reliability and latency requirements. Each node will need multiple service delivery points and parallel peer-to-peer connectivity, meaning a lot more fiber. On the other hand, a side benefit of this design will be reduced traffic on the backhaul network, since only the data that is actually needed has to be backhauled to the data center.
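A minimal sketch of that edge-aggregation pattern, with invented numbers and a hypothetical stand-in sensor function: the raw stream stays local, and only a compact summary crosses the backhaul.

```python
# Illustrative sketch (assumed numbers): aggregate raw sensor readings at
# the edge and backhaul only a summary, per the pattern described above.

import random

def read_sensor() -> float:
    """Hypothetical stand-in; a real device would sample its hardware."""
    return 20.0 + random.random() * 5

# Edge node: keep the raw stream local, backhaul only a summary.
raw = [read_sensor() for _ in range(10_000)]  # stays at the edge
summary = {
    "count": len(raw),
    "mean": sum(raw) / len(raw),
    "max": max(raw),
}
print(f"raw points held at edge: {len(raw)}, fields backhauled: {len(summary)}")
```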
Standardization to drive and scale development
M2M communication requires a high degree of automated service delivery and resource allocation, which creates challenges for network security, API security and identity management. Organizations such as the IEEE and the OpenFog Consortium are working toward standards for automatically authenticating each node on the network without human intervention. To be effective in a vendor-agnostic network, these solutions must be integrated into all the sensors, devices and other IoT hardware, which will require buy-in from the OEMs.
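For a flavor of what zero-touch authentication could look like, here is a minimal challenge-response sketch using a factory-provisioned key. The scheme, the names, and the key handling are illustrative assumptions, not drawn from any specific standard; in practice the device and the network would hold the key in separate, hardware-protected stores rather than one dictionary:

```python
# Minimal sketch of zero-touch device authentication: HMAC challenge-response
# with a key provisioned at manufacture (hence the need for OEM buy-in).
# Illustrative only; not taken from IEEE or OpenFog specifications.

import hashlib
import hmac
import os

DEVICE_KEYS = {"sensor-0042": os.urandom(32)}  # provisioned at the factory

def network_challenge() -> bytes:
    """Fresh random nonce so responses cannot be replayed."""
    return os.urandom(16)

def device_response(device_id: str, challenge: bytes) -> bytes:
    """Device proves key possession without human intervention."""
    return hmac.new(DEVICE_KEYS[device_id], challenge, hashlib.sha256).digest()

def network_verify(device_id: str, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(DEVICE_KEYS[device_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

c = network_challenge()
r = device_response("sensor-0042", c)
print("authenticated:", network_verify("sensor-0042", c, r))
```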
The need for standardization is also driving changes in infrastructure. A near-future goal of 5G, for example, is to enable virtual network slicing. Dividing the infrastructure into independent virtual networks enables operators to create an independent standardized layer above the control plane, from which they can deliver proprietary value-added services. A major challenge is prioritizing and routing the traffic so that any operator-specific service operates within the same SLAs on every other provider's network.
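To make the idea concrete, a slice descriptor might carry the SLA terms that have to hold across provider networks. Every field name below is a hypothetical assumption for illustration, not 3GPP or any operator's API:

```python
# Hypothetical illustration of a standardized slice descriptor carrying
# the SLA terms that must hold on any provider's network. Names assumed.

from dataclasses import dataclass

@dataclass
class NetworkSlice:
    name: str
    max_latency_ms: float     # SLA the slice must honor everywhere
    min_bandwidth_mbps: float
    availability: float       # e.g. 0.99999
    isolation: str            # "hard" (dedicated resources) or "soft" (shared)

slices = [
    NetworkSlice("vehicle-guidance", 1.0, 50.0, 0.99999, "hard"),
    NetworkSlice("metering", 500.0, 0.1, 0.999, "soft"),
]
for s in slices:
    print(f"{s.name}: <= {s.max_latency_ms} ms, >= {s.min_bandwidth_mbps} Mb/s")
```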
The challenge isn't just bandwidth: with techniques such as wavelength-division multiplexing (WDM) or coherent transmission, enough bandwidth can be created. It would also require standardizing parts of the providers' infrastructure to support virtual network slicing; it's an issue of cooperative design. This type of standardization would eventually lead to the development of off-the-shelf modular network components that could dramatically reduce the time and cost of maintaining the network and cut mean time to repair.
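The claim that enough bandwidth can be created is easy to sanity-check with simple arithmetic; the channel counts and per-channel rates below are assumed, DWDM-style figures, not a specific vendor's specs:

```python
# Quick arithmetic for the bandwidth point above: aggregate fiber capacity
# from WDM channel counts and per-channel rates. All figures are assumptions.

def fiber_capacity_gbps(channels: int, per_channel_gbps: float) -> float:
    return channels * per_channel_gbps

for ch, rate in [(40, 10), (80, 100), (96, 400)]:
    total_tbps = fiber_capacity_gbps(ch, rate) / 1000
    print(f"{ch} channels x {rate} Gb/s = {total_tbps:.1f} Tb/s per fiber")
```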
More clarity and even more questions
It will most likely be a few years before we see the kind of broad-scale IoT deployments that warrant the changes described here. But as the pieces start to fall into place, the rate of change will accelerate. As for timing, industrial applications are already beginning to emerge, and more will gradually be introduced as they demonstrate ROI.
Service providers may be a bit ahead of the curve, thanks to their experience with edge-based processing, storage and delivery systems. While much of their investment in the access network has gone into optimizing their radio networks (xRAN), it's unclear how much of that knowledge will transfer to the IoT ecosystem. Who will reach the new "beachfront" first, and what will they need when they get there? For all the clarity we've gained since 2014, there are still plenty of questions.