Edge isn’t a new strategy; the likes of Netflix have been deploying Content Delivery Networks (CDNs) with internet service providers (ISPs) local to our communities for years, right under our noses. And in times gone by, these Edge opportunities might have been reserved for the Netflixes and Facebooks of the world, but now the proliferation of 5G has enabled organizations in other industries to take advantage of this Edge strategy.
Businesses across the globe are now able to put real-time processing in geographies where they didn’t necessarily need it, or weren’t able to, in the past, but boy do they need it now. Today, workloads around decentralized machine automation, machine learning, and data warehousing need a real-time response that centralized facilities simply aren’t equipped to handle.
As a society, the Netflix model really has spoiled us; we expect to press play and go. Buffering isn’t a thing anymore, and the expectation of low-latency connectivity has spilled over from the consumer world into the enterprise.
But with the sheer volume of data being fed back to the centralized data center, the facility is simply unable to cope, which doesn’t bode well for the kind of experience we’ve come to expect from our digital services.
In this article, we pick the brain of Shad Sechrist, data center solutions engineer at Belden, who tells us what it takes to live life on the Edge (successfully).
Breaking it down
When we have thousands and thousands of IoT and Edge devices feeding data back into a centralized data center, you could (for example) have data from Florida flowing back to Portland, which would of course cause considerable latency and a bottleneck once it reaches that centralized facility.
The ability to break these data streams down into individual Edge nodes not only gives the centralized data center a much-needed breather, in that you’re not plying it with huge streams of data all at once, but provides you with that all-important resilience should anything go awry. If one of those Edge nodes goes down, you can simply fail over to another one. Ultimately, utilizing the Edge is a much more powerful way to ingest data.
Many industries have realized this, and the Edge has been implemented beyond the services and sectors you might traditionally consider ‘digital’. Healthcare, for example, is now rife with wearable tech, where any delays in data processing could lead to dangerous miscommunication. Even agriculture has jumped on the bandwagon, implementing AI and machine learning to monitor ground moisture, as well as drone data alerting farmers to animal or human interference with crops in real time, resulting in higher yields and less waste.
On a larger scale, you have use cases such as smart cities. Lake Nona in Florida currently boasts smart streetlights, smart traffic signals, self-driving shuttles, and even monitors people’s patterns around the city to eliminate traffic congestion as it happens. In this instance there is so much data to aggregate that, needless to say, if it were streaming back to a data center on the other side of the country, this smart city wouldn’t be so smart. In fact, it’d be downright dangerous, which is why the data needs to be processed in real time, on the ground, in that specific location.
For success at the Edge, the layer 2 network is key to reduced latency. “If you think of the layer 2 network, it’s your carrier, so in the UK that might be British Telecom, in the US it might be AT&T or Verizon,” says Sechrist.
“That’s the layer 2 backbone, that all these companies have developed to link all of this fiber together across the country. They’ve connected that fiber in many different ways, then through software-defined networking or an SD-WAN type solution, they’re able to control how that data flows, limiting latency issues by limiting the number of hops.”
For the uninitiated, a ‘hop’ is each intermediate point data passes through on its journey. Say you’re in Mississippi and you’ve got to get something to Seattle: how many hops do you have to go through to get there? Do you go down through Texas, then work your way up through Denver to Seattle? Or up to Chicago and then across? There are a lot of different routes that information can take, so you want to optimize it as much as possible, and that’s where software-defined networking and SD-WAN solutions come in.
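To make the hop-minimization idea concrete, here is a minimal sketch: a breadth-first search over a toy link graph. The cities match the example above, but the links between them are hypothetical, not a real carrier map, and a real SD-WAN controller weighs far more than hop count.

```python
from collections import deque

# Toy backbone: hypothetical city-to-city links (illustrative only).
links = {
    "Mississippi": ["Texas", "Chicago"],
    "Texas": ["Denver"],
    "Denver": ["Seattle"],
    "Chicago": ["Seattle"],
}

def fewest_hops(graph, src, dst):
    """Breadth-first search over the link graph; returns the path with the
    fewest hops, mimicking what hop-count optimization aims for."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(" -> ".join(fewest_hops(links, "Mississippi", "Seattle")))
# Mississippi -> Chicago -> Seattle: two hops beats the three-hop southern route
```

In this toy graph, the Chicago route wins simply because it crosses fewer links; that is the intuition behind limiting latency by limiting hops.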
Belden connects devices to the layer 2 network, which then connects all the other hybrid resources you have. At the end of the day, if you’re a company looking to put together an Edge solution, you’re not going to go out and build your own layer 2 network.
For starters, it’s expensive (to the tune of hundreds of millions of dollars) and time-consuming, unless you’re the likes of Google or Facebook. In fact, Facebook has put a point of presence in almost every colocation facility, in every city around the world. They’re dropping in a rack or a couple of racks in all those locations, to create the network that they want (and because they can).
But unfortunately, most companies don’t have the talent, time or money to develop such a network, so they utilize third-party sources, which is essentially what the cloud is: it’s out there, running through someone else’s network. Cloud compute and storage is basically infrastructure-as-a-service; after all, you don’t have to manage it, you simply get resources from that cloud provider, whether it be compute or storage, and pay for what you use.
And in an industry rife with skills shortages, when you don’t have the talent, these third-party resources come in very handy. Utilizing carriers who have already built the layer 2 network, and are in a position to maintain and optimize it, is not only cheaper than building your own; it’s also someone else’s responsibility to run.
Automating the Edge
Hybrid Edge data centers are a good example of what we call ‘lights-out’ data centers, i.e., unstaffed data centers.
Lights-out may not be mainstream yet (although many IT departments already work with technology like software-defined networking [SDN] and virtualization), but Covid-19 showed many operators what they might look like. In many cases, the pandemic proved that data centers could still operate with much less human involvement than originally thought.
When trying to do anything at the Edge, automation is key, because you’re not going to have people there. You’re going to need to make adds, moves and changes as your mission changes and things happen.
Utilizing the right media (copper or fiber, from Belden’s perspective), a solution can be designed via either a full-mesh or leaf-spine topology. But the challenge here is that Edge varies: it could be one cabinet; it could be four. So, in order to create a solution that best suits the application, companies like Belden need to ascertain how that data is going to be used. That is ultimately what determines what the design looks like.
Benefits of a leaf-spine topology
Implementing a leaf-spine topology creates redundancy through the row itself. You’re not actually adding a lot of additional switches; you’re connecting the switches to all the other switches at the access level, then going to another layer within that.
Sechrist explains, “You’ve got access layers connecting to the servers, aggregation switches connecting to access switches, which connect to core switches. So, you’ve got leaf switches that you’re connecting to all your devices, whether it be compute, network or storage, that’s in the rack itself, and that’s connected to another rack.”
Essentially what you’re doing is creating a mesh with multiple pathways for any of the information to go. “When you do a hyperconverged solution like this, most of your traffic tends to move from east to west and remains within the row. This means you’re not having to go out and find storage elsewhere, you can aggregate all that data into one place, moving onto the aggregation or distribution layer when it goes out to the consumer.”
This is great for reducing latency on your east-west connections because you remove one layer of the network topology. Traditionally you have three layers, but by utilizing a mesh or leaf-spine architecture you have two: you’re removing that one intermediate cross-connect piece.
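The layer reduction shows up directly in the hop count of an east-west trip. A rough sketch, assuming one representative path through each design (the layer names are generic textbook terms, not a specific vendor layout):

```python
# Switches a packet traverses between servers in two different racks
# (illustrative paths only; real designs vary).
three_tier = ["access", "aggregation", "core", "aggregation", "access"]
leaf_spine = ["leaf", "spine", "leaf"]

# Hops = switch-to-switch links crossed along the path.
print(len(three_tier) - 1)  # 4 hops through the traditional three layers
print(len(leaf_spine) - 1)  # 2 hops: any leaf reaches any other leaf via one spine
```

Halving the hop count on the rack-to-rack paths is where the east-west latency win comes from.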
To further facilitate automation, you could eliminate additional cross-connects by going with a full-mesh solution. But if you’re going to automate that, consider putting a cross-connect at the main access layer before you get to the layer 2 network.
This enables you to make adds, moves and changes from that one point and track port utilization across the entire network, so that when you do come to make any adds, moves or changes, it can be done via software-defined networking on the fly, and you don’t have to send people in to make those direct connection changes.
Despite the advantages, a leaf-spine topology unfortunately involves a lot more cabling, because you’re connecting a lot more devices: every leaf switch in the row connects to every spine switch before traffic reaches the cross-connect and goes out to the layer 2 network. All of which makes things a little more complex.
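To put rough numbers on that cabling trade-off, here is a back-of-envelope sketch. The switch counts are hypothetical, and the formulas count only inter-switch links, ignoring server and cross-connect cabling:

```python
def leaf_spine_links(leaves: int, spines: int) -> int:
    """Every leaf switch uplinks to every spine switch."""
    return leaves * spines

def full_mesh_links(switches: int) -> int:
    """Every switch connects directly to every other switch."""
    return switches * (switches - 1) // 2

# Hypothetical row: 8 leaf (top-of-rack) switches and 4 spine switches.
print(leaf_spine_links(8, 4))   # 32 inter-switch links
print(full_mesh_links(8 + 4))   # 66 links if the same 12 switches were fully meshed
```

The leaf-spine count grows as leaves × spines, so it stays well below a full mesh, but it is still far more cabling than a single uplink per rack, which is where the added complexity comes from.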
Fortunately, Belden is a trusted advisor in this field and can help you bring your solution to life. From the cabinet to connectivity, to design and power, Belden can piece all your equipment together, and, as well as manufacturing all the products that go onto the raised floor space, can ultimately connect all your devices to the layer 2 network.
“And that’s the hybrid piece of all this,” says Sechrist. “We’re not utilizing one centralized data center to do everything, but utilizing the right tool for every job. And whatever that job is it doesn’t necessarily have to be in a centralized location if you have that layer 2 network behind it bringing it all together.”
The right tool for the job
Edge isn’t a one-size-fits-all solution. The data’s use defines the architecture. Different applications, environments and use cases require different solutions.
You need to be asking, how is the data going to be used? Does it need to be used quickly? Is this a latency sensitive application? Then, what are we going to do with it? Is it going to stay within a metro area or go off somewhere else, and where?
“From the perspective of a build application engineer, we want to be challenging our customers by asking a lot of whys. Why are we doing that? Have you thought about doing it this way? To get them the right solution, the right architecture, and the right product.
“As we dig further into cabling architecture, we’re going to start asking things like, how much bandwidth do you need? What’s your upstream, what’s your downstream? Are we 400G hyperscalers, or is this a typical enterprise location where we might be doing 10G at the access layer and 1G at the rack layer?
“A lot of that will drive what we put in, whether we use copper, fiber; what happens if the mission changes for this location? Can we accommodate that with the same equipment, the same architecture and how do we make that change if needed?”
With anything, you’re trying to solve a problem and it all comes back to the application that you’re utilizing.
“What I’ve seen in my 20+ years is with Edge, it changes based on the problems you’re trying to solve,” muses Sechrist.
“If you go back 20 years, everything was very centralized. Now, with some of these streaming options and how we’re utilizing the internet and the bandwidth we’re using, we’re moving to everything being very decentralized. I think 5G will decentralize things even more, as we see more equipment going into towers and more towers going up to truly deliver a 5G experience.”
Sechrist concludes, “Belden will continue to evolve, adapting that cabling architecture to connect those devices to the networks and developing those networks to the speeds we’re trying to reach. Ultimately, Belden’s hybrid Edge evolution is guided by customer need, as 5G and 6G come about, as driverless vehicles come to the fore and new technologies become a necessity.”
For more information check out Belden's 'Automating the Edge' webinar.