The ability to quickly adapt, pivot and change is, according to Forrester, one of the best ways to maintain a competitive advantage in a world of global markets and instant communication.
But agility is impossible without access to the data that can make that happen. This access needs to be possible around the clock, because competitors don’t rest. And yet it only takes a single outage to bring it all crashing down, hampering not only longer-term strategy and decision making, but the day-to-day critical services that customers rely on.
We’re in an age where market leadership is governed by the ability to act on data, and customer trust is built by offering consistent, always-on service. Unfortunate incidents suffered by the likes of PlayStation Network and TSB are clear examples of why businesses can’t accept the chance that the lights might go out. If customer trust is lost, those customers will go elsewhere.
This challenge is more difficult to address in practice than it seems on paper. As cloud partners and third parties increasingly take much of the service and availability load, securing consistent quality of service isn’t just about the company itself having its own backup and recovery capabilities in place.
Businesses also need the reassurance that their partners are equally prepared. Thousands of companies have little IT capability of their own and are therefore almost totally reliant on external service providers to meet their availability targets.
Long term impacts
Despite the pressure, major cloud providers aren’t immune from downtime and disruption incidents of their own. Our latest Availability Report found the average cost of downtime totals more than £16m, not something a typical company can take in its stride.
As the saying goes: fail to prepare, prepare to fail. Having a plan to follow if an incident occurs will prevent a bad situation turning worse. According to IDC, 80% of businesses that don’t have a disaster recovery plan in place will simply collapse when an outage hits, and that’s before counting the longer-term damage from loss of customer trust.
So how can businesses get started on this path? Getting a clear understanding of where disaster recovery sits within their overall strategy is the first step. Using impact assessments to identify the apps and processes critical for maintaining consistent quality of service is also highly useful. By calculating the maximum downtime the business can withstand for each of these, ideal recovery targets become much clearer, and the disaster plan can work backwards from there.
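As an illustration of working backwards from maximum tolerable downtime, the following sketch uses entirely hypothetical figures and a hypothetical safety margin; it is not a prescription, just one way to turn an impact assessment into concrete recovery targets.

```python
# Illustrative sketch with hypothetical figures: rank services by the maximum
# downtime the business can withstand, then derive recovery time objectives.

# Hypothetical impact-assessment output: service -> maximum tolerable downtime (hours)
max_tolerable_downtime = {
    "payments": 0.5,
    "customer_portal": 2.0,
    "internal_reporting": 24.0,
}

def recovery_targets(mtd_hours, safety_margin=0.5):
    """Set each recovery time objective (RTO) at a fraction of the maximum
    tolerable downtime, leaving headroom for detection and escalation."""
    return {svc: hours * safety_margin for svc, hours in mtd_hours.items()}

targets = recovery_targets(max_tolerable_downtime)
for svc in sorted(targets, key=targets.get):
    print(f"{svc}: restore within {targets[svc]:g}h")
```

The services with the tightest targets are the ones the disaster plan should rehearse first.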
Businesses also need to factor in the partners they might need to put this disaster plan into action, and this choice shouldn’t be taken lightly. There are a range of factors to look at when choosing a potential cloud partner. There needs to be clarity on uptime guarantees and service request turnaround times, as well as any fees and compensation terms that might be buried in the contract. Questions of legal compliance also can’t be forgotten, and any service provider worth considering needs to demonstrate that they are fully compliant with legal requirements wherever they operate.
The location question
In a world where technology has overcome many physical distances and divides, data still has to be physically stored somewhere. Where it ends up will affect a company’s ability to react to incidents, as there are strengths and weaknesses to each location and format, whether on-premises or offsite. One widely used approach is to keep three copies of data on two different media, with one copy offsite; this can be a helpful benchmark when making the call on locations.
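The three-copies approach above can be sketched as a simple check. The inventory format here is hypothetical, just enough to make the rule concrete:

```python
# Minimal sketch of the three-copies rule (hypothetical inventory format):
# at least 3 copies, on at least 2 different media types, with 1 copy offsite.

def satisfies_3_2_1(copies):
    """copies: list of dicts like {"media": "disk", "offsite": False}."""
    enough_copies = len(copies) >= 3
    enough_media = len({c["media"] for c in copies}) >= 2
    one_offsite = any(c["offsite"] for c in copies)
    return enough_copies and enough_media and one_offsite

inventory = [
    {"media": "disk", "offsite": False},   # primary copy
    {"media": "disk", "offsite": False},   # local backup
    {"media": "cloud", "offsite": True},   # offsite copy
]
print(satisfies_3_2_1(inventory))  # True for this inventory
```

Running the check against a real backup inventory makes gaps, such as every copy sitting in the same building, immediately visible.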
Offsite data centers hold great appeal, with their convenience and optimal environment for servers and equipment. Security and tech support are always on hand, and scaling up and expanding using the cloud is as easy as renting more space. But all of this is nothing without being able to trust the provider, so businesses really do need to carry out their due diligence. The advantages are also impossible to realize without strong network access, so the onus is on the business to make sure sufficient bandwidth and acceptable latency are always in place.
A well-oiled machine
Planning for the worst is one of the best ways a company can bake reassurance and readiness into its business. The process of restoring services runs far more smoothly, but just having a plan in place is still not enough. Backups and fallback systems also need to be regularly tested. Using a service that automatically verifies the quality of those backups can provide huge peace of mind. Once businesses have mastered the basics, backups can then offer even greater value, creating on-demand sandbox environments to test patches, updates and new ways of working without disrupting operational environments.
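At its simplest, automated backup verification means comparing what is on disk against what was recorded at backup time. The sketch below assumes a hypothetical manifest of SHA-256 checksums recorded when the backup was taken; real verification services do far more, such as booting restored systems.

```python
# Illustrative sketch of automated backup verification. Assumes a hypothetical
# manifest mapping each backed-up file to its SHA-256 checksum at backup time.
import hashlib
from pathlib import Path

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(manifest):
    """manifest: {path: expected_checksum}. Returns paths that are missing
    or whose contents no longer match the recorded checksum."""
    return [p for p, expected in manifest.items()
            if not Path(p).exists() or sha256_of(p) != expected]
```

Scheduling a check like this after every backup run turns “we think the backups are fine” into a verified claim.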
Industry competition knows no borders, and it’s fiercer than ever. Agility is the only way to respond to these powerful market forces, which is why it has become such a major competitive differentiator. But the ability to act quickly can disappear just as fast in a system outage, with catastrophic consequences. With business models built on trust and quality of service, companies cannot afford to grind to a halt. Planning for the worst is imperative, and recovery is key.
This is the new reality of data.