Our lives have changed significantly since January 2020. Social distancing was an unknown concept before the spread of Covid-19; now, working from home has shifted from being an option to becoming a necessity for many people around the globe. The number of cities with at least one million inhabitants is expected to grow from 548 in 2018 to more than 700 by 2030, and with this population growth and move towards an urban lifestyle comes a corresponding growth in the number of hyperscale data centers, from 504 in 2018 to a predicted 700+ by 2030. Yet even as cities become more numerous, some 3 billion people will remain in rural settings. The question is: how do we distribute edge compute access and other infrastructure assets, and which applications are going to push the need for 5G and edge?
With 5G, IoT applications, and lower latencies, we are going to see growing demand for things like remote surgery, telehealth, telepresence, shared-experience simulators, and feature-rich, augmented-reality multiplayer online gaming, all of which are environments where downtime will not be acceptable. An interruption in delivery, or downtime, would mean lost money for operators, disgruntled customers, and violated contracts. The ability of intelligent agents to monitor the physical infrastructure that delivers the 5G service will be critical to providing the best user experience for everyone.
5G is predicted to grow to 3 billion connections by 2025. This growth will drive millions of edge locations, situated in colocation facilities, cell towers, cable and telco huts, on light poles, in homes, offices, factories, in vehicles, and along highways. Connecting each of these locations, whether wirelessly or through cable or fiber, will be crucial given the speed and throughput required.
When you consider the number and complexity of the infrastructure layers we are putting in place to deliver 5G, IoT, and smart cities, we will require AI to build a real-time picture that can be synthesized, processed, and turned into intelligence. AI will be crucial across every layer of the infrastructure.
What is AI?
In 2016, Government Technology defined four types of AI:
- Reactive machines (has no memory, only responds to different stimuli)
- Limited memory (uses memory to learn and improve its response)
- Theory of mind (understands the needs of other intelligent entities)
- Self-awareness (has human-like intelligence and self-awareness)
A later article in Forbes in 2019 described seven types of AI, adding a further three categories:
- Artificial Narrow Intelligence (ANI)
- Artificial General Intelligence (AGI)
- Artificial Superintelligence (ASI)
Why do we need AI for edge computing?
Delivering AI as a service to end users will become a task for the systems and networks we've deployed for 5G, IoT, and smart cities. So where will wireless operators do all of this processing? In short, everywhere: in the edge infrastructure, in the core, and further up the chain in the cloud.
Challenges faced when switching to edge computing
- Partitioning workload across locations in a timely manner – determining what can be done locally versus going back to the cloud without placing too large a penalty on latency.
- Physical security - this becomes more difficult when the infrastructure is located on a light pole or a wall-mounted box exposed to the elements.
- Managing capital expenditures for maximum return on investment - this determines build versus lease, hardware deployments versus running on someone else’s infrastructure.
- Access – by its very definition, edge computing occurs away from the central locations where the experts who can repair hardware and software may be based. Implementing remote monitoring and management becomes critical in this environment, where speed is of the essence and downtime is costly.
- Training – teaching people to operate and troubleshoot at the local level where an edge operator may not have local personnel in place. The learning curve may be an impediment to rapid deployments.
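The workload-partitioning challenge above can be sketched as a simple placement rule: run the task at the cheapest site that still meets its latency budget, falling back to on-device processing when nothing qualifies. A minimal illustration follows; the site names, latencies, and costs are made-up assumptions, not measurements from any real network, and a production scheduler would also weigh bandwidth, data locality, and load.

```python
# Minimal sketch of latency-aware workload placement (edge vs. cloud).
# All site names, latencies, and costs are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Site:
    name: str
    round_trip_ms: float   # typical round-trip latency to this site
    cost_per_hour: float   # relative compute cost at this site


def place_workload(latency_budget_ms: float,
                   sites: List[Site]) -> Optional[Site]:
    """Pick the cheapest site that still meets the latency budget."""
    feasible = [s for s in sites if s.round_trip_ms <= latency_budget_ms]
    if not feasible:
        return None  # nothing meets the budget; keep the workload on-device
    return min(feasible, key=lambda s: s.cost_per_hour)


sites = [
    Site("edge-cabinet", round_trip_ms=5, cost_per_hour=3.0),
    Site("metro-colo", round_trip_ms=20, cost_per_hour=1.5),
    Site("cloud-region", round_trip_ms=80, cost_per_hour=0.5),
]

print(place_workload(10, sites).name)    # tight budget -> nearby edge site
print(place_workload(100, sites).name)   # loose budget -> cheapest site wins
```

The trade-off the article describes falls out directly: tight latency budgets force work to the (more expensive) edge, while tolerant workloads drift back toward the cloud.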
Knowledge isn’t power until it is applied
Edge data centers need the right rack power and connectivity solutions in place to enable the infrastructure that brings AI and edge computing. For widely dispersed networking, storage, and computational assets to work reliably, the underlying hardware needs to be continuously powered and capable of being remotely managed. The vast majority of the power will need to be remotely monitored and managed through automation to ensure uptime while maintaining efficiency and avoiding costly truck rolls. Switched PDUs, capable of switching individual outlets on and off via commands passed through an on-board Ethernet or serial interface, will be a core requirement. Implementing infrastructure that supports remote access and control of doors, IP-based cameras, and environmental sensors for temperature, humidity, smoke, air pressure, and floor-mounted leak detection will also be a key line of defence.
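To make the remote-management idea concrete, here is a small simulated sketch of the two operations the paragraph describes: switching individual PDU outlets (for example, power-cycling a hung device instead of dispatching a technician) and checking environmental sensor readings against alert thresholds. The class, outlet names, and limits are hypothetical stand-ins; real switched PDUs expose comparable operations over SNMP, Redfish, or a vendor-specific network or serial interface.

```python
# Hypothetical sketch: remote outlet switching and environmental alerting
# for a switched PDU. Names and thresholds are illustrative assumptions.

class SwitchedPDU:
    def __init__(self, outlet_names):
        # Track per-outlet power state; True means the outlet is energized.
        self.outlets = {name: True for name in outlet_names}

    def set_outlet(self, name, on):
        """Remotely switch a single outlet on or off."""
        self.outlets[name] = on

    def power_cycle(self, name):
        """Turn an outlet off and back on, e.g. to reboot a hung device."""
        self.set_outlet(name, False)
        self.set_outlet(name, True)


def check_environment(readings, limits):
    """Return alert strings for any sensor reading outside its range."""
    alerts = []
    for sensor, value in readings.items():
        low, high = limits[sensor]
        if not (low <= value <= high):
            alerts.append(f"{sensor} out of range: {value}")
    return alerts


pdu = SwitchedPDU(["router", "camera", "compute-node"])
pdu.power_cycle("compute-node")  # remote recovery, no truck roll needed

limits = {"temp_c": (0, 45), "humidity_pct": (10, 80)}
print(check_environment({"temp_c": 51, "humidity_pct": 40}, limits))
```

In practice the alert path would feed an automation layer (or an AI agent, as the article suggests) that decides whether to throttle equipment, shed load, or notify an operator.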
Equally, power management and monitoring solutions need to be both flexible and scalable in order to adapt to constantly changing environments and growth requirements. Overhead power distribution systems with continuous access slots, which allow power to be tapped at any location, offer the flexibility to adapt to changing layouts. Innovative solutions like the HDOT Cx PDU, which offers a hybrid C13/C19 outlet capable of accommodating both C14 and C20 plugs, limit the need to purchase a new set of PDUs when equipment changes.
Whether you are powering a smart city network, supporting a new 5G implementation, or scaling the Internet of Things, you'll get close to the edge. All of these systems require support, and if knowledge is power, then everything needs power.