Data backup is one of the most critical, resource-intensive operations organizations can undertake – even a single failure to back up or recover data when needed can spell disaster for a business and its reputation. In 2017, GitLab was considered lucky when a backup failure resulted in only around 300GB of lost data affecting comments and bug reports – yet the incident was widely covered in the media and is still well remembered today.

While some organizations might have relatively simple backup needs, larger enterprises have complex and diverse IT environments, with a mix of both legacy and modern technologies in use. Backing up these often disparate and complex environments presents a significant challenge. However, complexity is no excuse for failure, and organizations must always be able to guarantee the retention of their data.

The complexity conundrum

Large enterprises do not stand still. They invest in successive cycles of new technology to keep pace with new developments, and they go through mergers and acquisitions. The result of all this change is a disparate and fragmented IT environment, which inevitably becomes more complex with each new technology cycle or acquisition. These organizations often have hundreds, if not thousands, of applications in use, delivered from a range of infrastructures, from on-premises mainframes to cloud environments – all from a mix of different vendors and service providers. Inevitably, this complexity is mirrored in organizations' backup environments.

As such, significant resources are often dedicated to managing backups and ensuring all necessary data is retained for the right period and available at any time. As the task grows, so does the management overhead, and the skill level needed to diagnose and resolve the problems that cause backups to fail becomes ever greater. In recent years, cloud and SaaS have been added to this increasingly complex mix. While organizations adopt cloud for a number of reasons – from reducing costs or simplifying operations to improving scalability – they need to be aware of the consequences for backup.

In particular, Infrastructure-as-a-Service and Software-as-a-Service resources can spin up and down, and data can move between different clouds as needed – often without the IT department being immediately aware – so backup processes must be able to cope with a much more dynamic environment.
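To make that concrete, here is a minimal sketch of the kind of reconciliation check such a process implies. The resource inventory and the list of protected assets are hypothetical stand-ins for what a cloud provider's inventory and a backup tool's job list would actually contain.

    # Minimal sketch: reconcile a cloud resource inventory against backup coverage.
    # Both input sets are hypothetical stand-ins for data a cloud inventory API and
    # a backup tool would provide in a real environment.

    def find_unprotected(inventory: set[str], protected: set[str]) -> set[str]:
        """Return resource IDs that exist in the environment but have no backup job."""
        return inventory - protected

    # Example: two short-lived VMs were spun up since backup jobs were last reviewed.
    discovered = {"vm-app-01", "vm-app-02", "db-orders", "vm-temp-07", "vm-temp-08"}
    backed_up = {"vm-app-01", "vm-app-02", "db-orders"}

    gaps = find_unprotected(discovered, backed_up)
    if gaps:
        print(f"Resources with no backup coverage: {sorted(gaps)}")

Run against real inventory data, a check like this turns "we assume everything is protected" into a list of named exceptions that someone has to act on.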

Dangerous expectations

There is still a worrying tendency for organizations to expect too much from their cloud service providers – especially from Software-as-a-Service. Organizations often assume that when they move to a cloud service, their backup needs will be met by the provider. This is a risky assumption.

For instance, Microsoft Exchange Online (part of Office 365) offers 90 days of backups by default. For businesses in regulated industries or with more stringent retention requirements, which might typically hold data for many years or even indefinitely, this does not come close to what they need. Organizations that adopt cloud services without considering the backup implications can find themselves in deep water when critical data is not being kept in line with their obligations. The fallout can range from being unable to retrieve important data to falling foul of regulators, facing fines and possibly even suffering reputational damage.
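The scale of the gap is easy to quantify. The snippet below is a back-of-the-envelope illustration only, assuming a hypothetical seven-year retention obligation against the 90-day default mentioned above.

    # Back-of-the-envelope comparison of a SaaS default retention window against
    # a hypothetical seven-year regulatory retention obligation.
    DEFAULT_RETENTION_DAYS = 90          # provider default cited above
    REQUIRED_RETENTION_DAYS = 7 * 365    # assumed regulatory requirement

    shortfall = REQUIRED_RETENTION_DAYS - DEFAULT_RETENTION_DAYS
    print(f"Shortfall: {shortfall} days, roughly "
          f"{REQUIRED_RETENTION_DAYS / DEFAULT_RETENTION_DAYS:.0f}x the default window")
    # Shortfall: 2465 days, roughly 28x the default window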

Preparing for the jump

The only way to cope with the additional dynamism and complexity the cloud introduces, and to avoid falling into the trap of assuming data is being backed up and stored as needed, is to meet the challenge head-on. Enterprises need to embrace their responsibility to understand the implications of this transition, and how it will impact their backup needs. They can then ensure that their approach to backup can evolve to meet the demands of the cloud.

This will not be simple. Organizations need to invest time in researching the cloud technologies they are using, so that they understand the exact impact new services have on their backup standards, and how to adapt to them. They need the right processes in place to ensure that those standards are applied consistently across their environment, that backup is always performed in line with their policies, and that data is kept for as long as it is needed.
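One way to keep such standards enforceable rather than aspirational is to express them as data and check configured systems against them. The sketch below assumes hypothetical policy categories, systems and retention figures purely for illustration.

    # Minimal sketch of a policy-as-data retention check. Category names, systems
    # and retention figures are hypothetical examples, not real configurations.
    RETENTION_POLICY_DAYS = {
        "financial-records": 7 * 365,
        "customer-email": 3 * 365,
        "application-logs": 90,
    }

    configured_backups = [
        # (system, policy category, retention currently configured in the backup tool)
        ("erp-prod", "financial-records", 7 * 365),
        ("saas-email", "customer-email", 90),
        ("web-frontend", "application-logs", 90),
    ]

    for system, category, actual_days in configured_backups:
        required_days = RETENTION_POLICY_DAYS[category]
        if actual_days < required_days:
            print(f"{system}: retains {actual_days} days, policy requires {required_days}")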

They may also need new technologies to protect cloud data in line with their standards where the cloud provider's own offering falls short. Finally, they will need the right skills so that, when incidents arise in the many environments where their data lives, they can be swiftly rectified. If enterprises do not have all of these components in place, they must either accept an elevated level of risk or make closing the gap an investment priority.

The ever-increasing complexity of backup environments may be daunting, and the cloud only adds to it. However, this is not a task that organizations should sweep under the rug or assume will be handled by others. By investing in the knowledge, technology, processes and skills they need, enterprises can buy themselves peace of mind.