The cloud’s here to stay. Whether hosting individual applications or serving as the single platform of choice, the cloud has been firmly embraced by enterprises. The public cloud is also growing in popularity – 451 Research found in a recent survey that the public cloud will be supporting the majority of enterprise-level workloads by 2020.
At the same time, the way enterprises implement the cloud is diversifying. Survey findings from the likes of RightScale and ESG have established that a significant majority of organizations are adopting a multi-cloud strategy. Given the benefits, it’s no surprise the approach is taking off – enterprises can minimize risk while remaining agile and accelerating their time to market. Backing up to the cloud enables them to focus on what’s coming next, rather than being distracted by managing their data.
The drivers of diversification
As reliable as public cloud providers such as Amazon, Google and Microsoft may seem, there are times when they have outages. While rare, every major provider has suffered a service disruption in the last year. For many enterprises, even the possibility of losing access to critical applications and data is too much to leave to chance.
What these outages highlight is the fact that enterprises are responsible for their own data protection. As comprehensive as service level agreements can often be, they will only guarantee network availability, or the durability of infrastructure – rarely customer data or its availability. These elements often fall under what’s called a shared responsibility model.
Shared responsibility models differ between cloud providers, so executives need to make sure that they understand the details and specifics of each. Microsoft’s, for instance, states that for on-premises, IaaS and PaaS deployments, responsibility for data remains with the customer. Microsoft’s SaaS agreement does cover data access, but only for a maximum of 30-60 days; beyond that, companies must find another method. Amazon is very clear on where responsibility lies, completely separating itself from any responsibility or accountability for security, or for data that is lost or modified.
Every enterprise aims to get data management and protection procedures under control by introducing cloud-based methods that maintain availability around the clock. Regardless of the approach, the ultimate goal of any storage strategy is the availability of data. When data is unavailable, lost or unprotected, there’s a huge price to pay.
There was a time when a once-a-day backup was sufficient, either to tape, disk or to the cloud. But so much data is now being created and modified in a given day that backup needs to be a continuous activity. The growth of multi-cloud strategies has also meant that data has become sprawled across multiple clouds, databases and devices, adding further complexity to the backup challenge.
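To make the shift from once-a-day to continuous backup concrete, here is a minimal sketch in Python. It polls a source directory and copies any file modified since the last pass to a backup location; the directory paths, polling interval and loop structure are illustrative assumptions, not a production design (a real system would use change notifications and a cloud storage API rather than local copies).

```python
import shutil
import time
from pathlib import Path


def sync_changed_files(source: Path, backup: Path, last_run: float) -> int:
    """Copy every file under `source` modified since `last_run` into `backup`.

    Returns the number of files copied. Directory layout is mirrored so the
    backup stays browsable.
    """
    copied = 0
    backup.mkdir(parents=True, exist_ok=True)
    for path in source.rglob("*"):
        if path.is_file() and path.stat().st_mtime > last_run:
            dest = backup / path.relative_to(source)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)  # copy2 preserves timestamps/metadata
            copied += 1
    return copied


def continuous_backup(source: Path, backup: Path, interval: float = 60.0) -> None:
    """Run the sync in a loop: backup as a continuous activity, not a daily job."""
    last_run = 0.0
    while True:
        now = time.time()  # capture before syncing so concurrent edits are re-checked
        sync_changed_files(source, backup, last_run)
        last_run = now
        time.sleep(interval)
```

The polling interval is the trade-off dial here: shorter intervals narrow the window of unprotected changes at the cost of more scanning.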
With both the speed of data creation and its sprawl continuing to increase, enterprise IT teams are easily overwhelmed. So what do the steps to addressing this challenge successfully look like?
- Backup – Any effective strategy needs to master the basics, but far too many organizations struggle to capture backups that are actually usable in the event of an outage, attack, loss or theft. APIs can really come into their own here, helping manage data ingestion and triggering proper protection.
- Cloud Mobility – The ability to move workloads across different clouds and platforms. Cloud mobility is critical for organizations that need to maintain speed and control of their data in a multi-cloud environment.
- Aggregation – The key to this step is a single, extensible platform for delivering availability, making sure that all vital data is protected in an increasingly multi-data center, multi-cloud environment. It’s at this step that insight into the data can begin to be gained.
- Visibility – The value of data aggregation is limited without the accompanying visibility. Teams should be able to see, in one place, where all of their data is being protected and stored. In doing so, they’re able to become much more proactive in their resource monitoring and allocation. Visibility of this kind also makes it possible to dynamically create isolated instances of protected servers for disaster recovery, DevOps, patch and security testing, or compliance.
- Orchestration – Once the data is visible, the next stage is being able to optimize it. The orchestration stage is all about harnessing the data sprawl that businesses are increasingly seeing. The ideal here is to be able to move data seamlessly to the best location at any given time, in order to optimize resource use as well as ensure continuity of service.
- Automation – While ambitious and aspirational in nature, this final stage is the next frontier in data management, enabling data to be self-managing by learning to back up and migrate to the ideal location(s) according to business need in real time, to secure itself during anomalous activity and to be instantly recoverable.
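The orchestration step above – moving data to the best location at any given time – can be sketched as a simple placement decision. The target names, cost figures and scoring rule below are hypothetical assumptions for illustration; a real orchestrator would pull live health and pricing data from each provider.

```python
from dataclasses import dataclass


@dataclass
class BackupTarget:
    name: str          # illustrative labels, e.g. "aws-s3", "azure-blob"
    available: bool    # result of the latest health check
    cost_per_gb: float  # storage cost in dollars per GB per month (assumed)
    latency_ms: float   # measured write latency to this target


def choose_target(targets: list[BackupTarget]) -> BackupTarget:
    """Pick the cheapest currently available target, breaking ties on latency.

    A stand-in for the orchestration step: skip unavailable clouds entirely
    (continuity of service), then optimize resource use among the rest.
    """
    candidates = [t for t in targets if t.available]
    if not candidates:
        raise RuntimeError("no backup target currently available")
    return min(candidates, key=lambda t: (t.cost_per_gb, t.latency_ms))
```

The same structure extends naturally toward the automation step: replace the fixed scoring tuple with a learned or policy-driven function and the placement decision becomes self-managing.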
Diversify, but keep the control
We live in a hybrid world, and enterprises require a scalable platform that can handle multi-cloud and on-premises environments alike. By diversifying data protection across various public clouds, enterprises can ensure greater protection of their data and focus on what needs to be front of mind next.
However, business leaders need to appreciate that data remains primarily the responsibility of the owning organization, not of the cloud service provider. Outages may be infrequent, but enterprises must continually remind themselves that total responsibility for data availability lies in their own hands.
The cloud’s here to stay, and by implementing the right kind of data management functionality over their systems, enterprises can give themselves more control of the movement and handling of data and resources.