A stellar disaster recovery strategy is crucial to the success of a business. The ability to recover mission-critical data in a timely manner is something businesses expect from their IT teams. However, some disaster recovery plans aren’t as good as others and may need to be overhauled. Here are four reasons to reevaluate your disaster recovery plan.
1. You may be overestimating your current capabilities
How long will it take your IT team to recover in the event of a disaster? 30 minutes? Two hours? More? You may think you know the answer, but your expectations might not match reality. According to a February 2016 ESG Research Report, The Evolving Business Continuity and Disaster Recovery Landscape, 35 percent of responding businesses expected to be able to recover from a disaster in 15 minutes or less. However, only 6 percent of respondents were actually able to meet those recovery time objectives, and 30 percent of recoveries took between two and four hours. The onus will increasingly fall on IT teams to find data protection solutions that can recover from disasters quickly, and those solutions will need to be validated to prove that actual recovery times align with business expectations. Short of living through a real disaster, the only way to know how long recovery will take is to test.
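As a rough illustration, here is a minimal sketch of how recorded recovery-test times could be compared against the RTO the business expects. The test figures are entirely hypothetical, not numbers from the ESG report:

```python
from statistics import mean

# Recovery time objective the business expects, in minutes -- assumed value
TARGET_RTO_MINUTES = 15

# Recovery times recorded during scheduled DR tests, in minutes -- hypothetical data
measured_recoveries = [22, 145, 38, 190, 17]

average = mean(measured_recoveries)
worst = max(measured_recoveries)

print(f"Average recovery: {average:.0f} min, worst case: {worst} min")
if worst > TARGET_RTO_MINUTES:
    print(f"The {TARGET_RTO_MINUTES}-minute RTO is not being met -- revisit the plan")
```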
2. Security will continue to be a focal point
Ransomware is an ever-present threat. And it’s not going away anytime soon. Attacks continue to increase at a rapid rate and hackers are becoming more and more daring, attacking small businesses and high-profile enterprises alike.
Every business needs a plan for when ransomware strikes, because being prepared is the best way to thwart attackers. According to Gartner, “Once files are encrypted, organizations have two choices: restore from a backup or pay up.” Knowing this, businesses should look for solutions that can back up often to limit the amount of data lost in a ransomware attack.
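To make that concrete, here is a minimal sketch, using purely illustrative numbers, of why backup frequency matters: the worst-case data loss from a ransomware attack is bounded by the time since the last clean backup.

```python
# Assumed, illustrative figures -- not from any cited study
BACKUP_INTERVAL_MINUTES = 60      # how often backups are taken
TRANSACTIONS_PER_MINUTE = 500     # hypothetical rate of business activity

# If ransomware strikes just before the next backup runs, everything written
# since the last clean copy is at risk.
worst_case_loss = BACKUP_INTERVAL_MINUTES * TRANSACTIONS_PER_MINUTE
print(f"Up to {worst_case_loss:,} transactions at risk with a "
      f"{BACKUP_INTERVAL_MINUTES}-minute backup interval")
```

Halving the backup interval halves that exposure window, which is the whole point of backing up often.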
3. Downtime can cost businesses big money
According to the Ponemon Institute’s January 2016 Cost of Data Center Outages report, the average cost of IT downtime is nearly $9,000 per minute. That is already a steep figure, and since systems can stay down for hours depending on recovery time objectives, the total bill can climb much higher for certain kinds of businesses. For example, according to Forbes, Amazon.com’s 2013 IT outage cost the company $66,240 per minute; being down for about 30 minutes cost the business nearly $2 million. For a more recent example, the high-profile outages at a few airlines earlier this year damaged those companies’ reputations and finances.
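A quick back-of-the-envelope calculation with the figures cited above shows how fast those numbers compound:

```python
PONEMON_AVG_COST_PER_MINUTE = 9_000    # approximate average IT downtime cost (Ponemon, 2016)
AMAZON_COST_PER_MINUTE = 66_240        # Forbes figure for Amazon's 2013 outage
OUTAGE_MINUTES = 30                    # roughly the length of Amazon's outage

print(f"Average business:  ${PONEMON_AVG_COST_PER_MINUTE * OUTAGE_MINUTES:,}")
print(f"Amazon-scale site: ${AMAZON_COST_PER_MINUTE * OUTAGE_MINUTES:,}")
# Roughly $270,000 versus nearly $2,000,000 for the same half hour of downtime
```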
4. Simpler is better
When you are in a disaster scenario, simplicity is critical to getting systems back up and running quickly. Complex, disjointed backup and recovery solutions are always going to be difficult to work with in a real-life disaster. When the business is down and IT is scrambling to get the data center back online, the last thing you want is to wrangle a complicated, manual recovery process. Plans that rely on predefined settings and automation are best suited to ensuring the correct data is safeguarded in the event of a disaster.
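As a sketch of what predefined settings and automation can look like in practice, a declarative policy decided ahead of time means no one is improvising during an outage. All names and tiers below are hypothetical, not any particular product’s configuration:

```python
# Hypothetical protection policy, defined long before any disaster
PROTECTION_POLICY = {
    "mission_critical": {"backup_interval_min": 10,  "replicate_offsite": True},
    "standard":         {"backup_interval_min": 60,  "replicate_offsite": True},
    "low_priority":     {"backup_interval_min": 240, "replicate_offsite": False},
}

# Hypothetical workloads mapped to policy tiers
WORKLOADS = {"erp-db": "mission_critical", "intranet": "standard", "dev-vm": "low_priority"}

for workload, tier in WORKLOADS.items():
    settings = PROTECTION_POLICY[tier]
    offsite = "on" if settings["replicate_offsite"] else "off"
    print(f"{workload}: back up every {settings['backup_interval_min']} min, "
          f"offsite replication {offsite}")
```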
Knowing the importance of disaster recovery, IT teams have to look hard at the needs and expectations of the business to make sure that data is protected no matter the situation.
Jesse St. Laurent is vice president of product strategy at SimpliVity