Inefficient network operations at large enterprises have become one of the main causes of unplanned downtime and the service disruptions that follow. In years past, unplanned downtime was largely attributable to enterprise-operated data centers and the resiliency of their power and cooling. Today, more than half of all computing is hosted in the public cloud, creating a virtualized infrastructure in which the network plays the dominant role in determining IT service delivery.
Trouble tickets never come alone
Recently, a multinational software provider based in Germany reported that its NetOps team averages more than 10,000 trouble tickets per month. After examining the resulting incident data, the team realized that more than half of those tickets could have been prevented entirely if they had a way to define the behavior expected of the network for each of their applications and services, and then continuously compare that expectation to the live network. By identifying deviations from desired behavior early enough, they could eliminate the majority of their unplanned downtime. And preventing a production issue from occurring in the first place is far less costly than resolving the same issue after it has reached production.
In the cloud-enabled world, every enterprise needs a more proactive approach to network operations, one based on continuously validating that the network can deliver the application service outcomes its users depend on.
So how do you start?
Get out of react mode
For years, business leaders and IT professionals alike have struggled with a chaotic approach to NetOps that runs perpetually in react mode. In this mode, teams repair problems only after production users report them, and resolution can take days or longer for all but the most critical issues. This is costly both directly and indirectly.
To address network outages proactively, we first must look at some of their root causes. These include:
- Hardware Failures and Outdated Equipment: Given enough time, every device and piece of hardware will fail. Software bugs, power spikes, and poor maintenance can all accelerate device failures.
- Resiliency Problems: When the Texas power grid went down in 2021, the state’s network infrastructure failed and backup cell networks went down with it. An outage is not the time to discover whether an organization’s resiliency and network failover architectures actually work.
- Routing Problems: If an ISP goes down or configurations change, traffic can slow significantly or stop altogether. Re-routing will occur, but the performance of the resulting sub-optimal paths can be prohibitive for the business.
- Human Error: Many network failures are caused by IT techs and operators making mistakes or changing a configuration without realizing its full effect on other applications. The fix for one problem might have an unintended consequence for another.
A 2022 survey by the Uptime Institute found that over 75 percent of service outages cost companies more than $100,000, and many respondents reported that a single incident can cost more than $1 million. Reputational damage and lost customers are harder to quantify but can be just as significant. Networks run on reactionary operational plans leave large gaps in application performance and security, damaging business services across the board and opening the door for threat actors to access sensitive customer data, deploy ransomware, and more.
So, what can companies do? The answer: prevent outages from occurring in the first place by using a no-code approach to automate continual enforcement of the network’s desired behaviors.
Prevention is the best line of defense, so establish a proactive verification strategy that identifies potential problems and resolves them before they lead to network outages. It starts with using a no-code approach to articulate all of the network behaviors, or “intents,” that must hold for the business to operate properly. This lets your subject matter experts capture their knowledge in executable form without involving programmers or development project teams. Intents may include specific application-to-application performance levels, a maximum allowable latency for interactive applications, secured access to information or devices, or any number of quality-of-service requirements.
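As a rough illustration, such intents might be captured declaratively, so that a subject matter expert states the desired behavior rather than writing code. All names, thresholds, and fields below are hypothetical, not any particular product’s schema:

```yaml
# Hypothetical declarative intents -- names, thresholds, and fields are illustrative only
intents:
  - name: crm-to-db-latency
    type: max-latency
    source: crm-app-servers
    destination: order-db-cluster
    threshold_ms: 50
  - name: guest-vlan-isolation
    type: access-control
    rule: deny
    source: vlan-guest
    destination: finance-subnet
  - name: core-pair-mirrored
    type: config-mirror
    devices: [core-sw-1, core-sw-2]
```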
With this list of required intents in hand, no-code network automation can be leveraged to enforce each of them, confirming that they remain intact and taking action proactively when they do not. For example, if two network devices are expected to mirror one another, the proactive approach is to define an intent that compares the two configurations and verifies they are identical.
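A minimal sketch of such a mirror intent, assuming plain-text device configs and ignoring device-specific lines such as hostnames (the sample configs and ignore list are illustrative only):

```python
import difflib

# Device-specific lines (e.g. hostnames) are excluded before comparison -- illustrative list.
IGNORE_PREFIXES = ("hostname",)

def normalize(config: str) -> list[str]:
    """Drop device-specific lines so that intended mirrors compare as equal."""
    return [line for line in config.splitlines()
            if not line.strip().startswith(IGNORE_PREFIXES)]

def mirror_drift(config_a: str, config_b: str) -> list[str]:
    """Return a unified diff of the two normalized configs; empty means the intent holds."""
    return list(difflib.unified_diff(normalize(config_a), normalize(config_b),
                                     fromfile="device-a", tofile="device-b",
                                     lineterm=""))

# Hypothetical sample configs for two devices that should mirror one another
primary = "hostname core-1\nvlan 10\ninterface eth0\n mtu 9000"
standby = "hostname core-2\nvlan 10\ninterface eth0\n mtu 1500"

drift = mirror_drift(primary, standby)
if drift:
    print("Mirror intent violated:")
    print("\n".join(drift))
```

An empty diff means the intent is intact; any output is actionable drift that can be surfaced before users notice a problem.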
This same intent-based approach should power your change management processes: the network intents associated with every affected device must be tested before and after each change, in order to prevent unintended consequences and the service outages that follow.
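One way to sketch that before-and-after test, with hypothetical lambda checks standing in for real network probes:

```python
from typing import Callable

# Each intent check returns True when the desired behavior currently holds.
# The check names and lambdas below are hypothetical placeholders for real probes.
IntentChecks = dict[str, Callable[[], bool]]

def run_intents(checks: IntentChecks) -> dict[str, bool]:
    """Evaluate every intent and record its pass/fail state."""
    return {name: check() for name, check in checks.items()}

def regressions(before: dict[str, bool], after: dict[str, bool]) -> list[str]:
    """Intents that passed before a change but fail after it."""
    return [name for name, ok in before.items() if ok and not after.get(name, False)]

checks: IntentChecks = {
    "latency-under-50ms": lambda: True,
    "acl-blocks-guest-vlan": lambda: True,
}

snapshot = run_intents(checks)                    # before the change
checks["acl-blocks-guest-vlan"] = lambda: False   # simulate a change that breaks an ACL
current = run_intents(checks)                     # after the change

print(regressions(snapshot, current))  # ['acl-blocks-guest-vlan']
```

Any name returned by `regressions` is a change-induced break that can be rolled back before it becomes a trouble ticket.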
By managing networks by intent, not by device, NetOps teams can pinpoint problems in the making. Continuously verifying network intents through automation detects when application performance drops or when security access is compromised, and addressing those issues before they touch users reduces unplanned downtime.
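Continuous verification can be as simple as re-running every intent on a schedule and alerting on failures. A sketch, with the alert hook and intent checks as hypothetical placeholders:

```python
import time
from typing import Callable

def verify_cycle(checks: dict[str, Callable[[], bool]],
                 alert: Callable[[str], None]) -> list[str]:
    """Run every intent once, alert on each violation, and return the failures."""
    failed = [name for name, check in checks.items() if not check()]
    for name in failed:
        alert(f"intent violated: {name}")
    return failed

def watch(checks: dict[str, Callable[[], bool]],
          alert: Callable[[str], None],
          interval_s: float = 300.0,
          cycles: int = 1) -> None:
    """Re-verify all intents on a fixed interval (cycle count bounded for illustration)."""
    for _ in range(cycles):
        verify_cycle(checks, alert)
        time.sleep(interval_s)
```

In practice the alert hook would feed a ticketing or notification system, so a failing intent opens an incident before any user reports it.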
Today, preventing network outages comes down to wanting to do so. No-code network automation technology is available and can put to work the expertise and experience most organizations already have. By establishing a proactive NetOps strategy and making no-code network automation readily accessible to all IT teams, the majority of network failures can become a thing of the past.