For all the hype about moving applications to the cloud and making legacy apps “cloud-native,” those of us in IT have a poorly kept secret: legacy systems are alive and well, and they’re not going anywhere anytime soon. Though the cloud promises the cost savings and scalability that businesses are eager to adopt, many organizations are not yet ready to let go of existing applications that required massive investments and have become essential to their workflows.
The process of rewriting these often mission-critical apps for the cloud typically ends up being lengthy and expensive, with unexpected problems that vary from company to company. Some of the challenges an organization will face when rewriting applications include:
1. Latency Issues – Existing applications need fast data access, and as storage infrastructure grows in size and complexity, latency increases as the apps get farther from the data. Unless every dataset moves to the cloud along with the application, added latency is guaranteed.
2. Mismatched Protocols – Legacy apps leverage standard protocols like NFS and SMB for network-attached storage (NAS), which are incompatible with object storage, the architecture most commonly used in the cloud.
3. Time Investments – Writing new applications to cloud standards makes a lot of sense, especially if those applications represent new pipelines or workflows that aren’t in the critical business path. However, re-writing something that has been running for years will inevitably uncover unforeseen dependencies and expectations, turning a simple re-write into a major project.
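The latency point above is easy to make concrete with a little arithmetic: each request pays the network round trip before any bytes flow, so chatty file access patterns amplify even modest latency. The figures below are illustrative, not measurements.

```python
def transfer_time_s(file_mb, round_trips, rtt_ms, bandwidth_mbps):
    """Rough wall-clock time to read a file over a network:
    per-request round-trip latency plus raw transfer time."""
    latency_s = round_trips * (rtt_ms / 1000.0)
    transfer_s = (file_mb * 8) / bandwidth_mbps
    return latency_s + transfer_s

# Same 100 MB file, same 1 Gbps link, read in 1,000 small sequential requests:
local = transfer_time_s(100, 1000, 0.5, 1000)    # ~0.5 ms RTT on a LAN
remote = transfer_time_s(100, 1000, 40.0, 1000)  # ~40 ms RTT to a distant cloud region
```

With these assumed numbers, the local read takes about 1.3 seconds while the remote read takes over 40, even though the link bandwidth is identical; the gap is pure round-trip latency.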
For many, migrating to the cloud is daunting. Legacy systems offer user familiarity and time efficiencies that many professionals are simply not yet ready to sacrifice for new technologies with which they have very little experience. In essence, using existing applications just makes sense for most enterprises — and you know what they say about messing with something that isn’t broken.
That said, nothing stops you from moving these applications to the cloud as-is. Enterprises may still choose to develop a plan that includes modernizing, but in the meantime they can gradually and non-disruptively move key application stacks while preserving existing workflows.
App rewrite redux not needed
When moving a legacy application to the cloud, the immediate impulse is often to start from scratch, rewriting the code. However, once IT teams begin to evaluate the process in more detail, they discover that these projects are often staggeringly complex, sometimes requiring years to complete. Not surprisingly, this means that the costs related to rewriting can quickly soar well beyond the initial estimates.
It’s also not just the programming hours required to complete the task. Critical issues factor in as well, including availability, data migration, disaster recovery, governance, implementation and security. And don’t forget the distinct possibility that once the project is complete, the rewritten applications won’t work as well as they did before the undertaking. More likely, you will see little value in rewriting something for the cloud that was never meant to operate there. For example, an analytic toolset running against specific data files that are gathered the same way every day should not be rewritten unless a distinct commercial or cost reason exists. On the other hand, there’s no reason to write an IoT analysis tool using old technologies; those should be built cloud-native from the start.
But what if the legacy applications didn’t need to be rewritten to take advantage of the cloud? In fact, available linking technologies mean most legacy applications don’t need to be rewritten. Applications that have traditionally worked with files and directories can continue running in that fashion, except that now you can run them on AWS EC2 or Google Compute Engine virtual machines. The main challenge is providing access to the data.
Cloud bursting eliminates the need for the application rewrite
The data access protocol is another potential stumbling block when migrating legacy applications. Many applications developed since the 1990s were written to depend on NAS. Most of these applications need traditional protocols such as NFS or SMB/CIFS to communicate with storage, but file services are not widely available as part of cloud providers’ offerings. Today’s cloud object storage instead depends on Representational State Transfer (REST) APIs, which use Web standards to communicate among servers.
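The gap between the two models can be sketched in a few lines. A legacy app sees a NAS mount as ordinary files: byte-addressable, seekable, renamable. Object storage offers none of that; each whole object is fetched or stored via an HTTP request. The bucket and key names below are hypothetical, and the request is only constructed, not sent.

```python
import os
import tempfile

# POSIX-style access, as a legacy app over NFS/SMB sees it:
# the mount point hides the network entirely.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("payload")
    path = f.name
with open(path) as f:        # byte-addressable, seekable
    data = f.read()
os.unlink(path)

# Object storage has no open/seek/rename. A GET for the same data
# against an S3-style endpoint looks roughly like this request line
# (a real request also carries Host, Date and Authorization headers):
bucket, key = "my-bucket", "reports/payload.txt"  # invented names
request_line = f"GET /{bucket}/{key} HTTP/1.1"
```

This mismatch is why an application that calls `open()` and `seek()` cannot simply point at an object store; something has to translate between the two models.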
Hybrid cloud technologies are now available that allow legacy applications to run on servers with their original protocols while communicating with the cloud. Cloud bursting solutions allow these legacy applications to run in the data center or in a remote location while letting them use public cloud compute resources as needed.
With cloud bursting, the majority of the data can stay on premises, reducing risk and minimizing the need to move large files, thereby saving time. Not only does this make life easier for IT; cloud bursting can also mean faster time to market, paying only for what is used, sustained focus on the core business, and improved financial agility.
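The bursting idea itself is simple to sketch: work stays on premises until local capacity is exhausted, and only the overflow spills to cloud compute. This toy placement function is an illustration of that policy, not any vendor’s scheduler; the slot count and job names are invented.

```python
# Toy sketch of the bursting decision: fill local capacity first,
# then overflow to cloud instances. All figures are illustrative.
LOCAL_SLOTS = 4

def place_jobs(jobs, local_slots=LOCAL_SLOTS):
    """Assign each job to on-prem compute until slots run out,
    then to burst capacity in the public cloud."""
    placement = {}
    for i, job in enumerate(jobs):
        placement[job] = "on-prem" if i < local_slots else "cloud-burst"
    return placement

jobs = [f"render-{n}" for n in range(6)]
placement = place_jobs(jobs)
# render-0 through render-3 stay on premises; render-4 and
# render-5 burst to the cloud.
```

Real bursting products make this decision on live metrics such as queue depth and data locality, but the shape of the policy is the same: the cloud is overflow capacity, not the default home.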
Data recovery in the cloud
Because many legacy applications are mission-critical, downtime is prohibitively expensive and makes for unhappy customers, employees and other stakeholders. Cloud snapshots, or more comprehensive recovery from a mirrored copy stored in a remote cloud or private object location, keep the needed data accessible and recoverable while using less expensive object storage options.
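The snapshot-and-restore flow can be sketched minimally. Here the object store is modeled as a plain dict and the keys are invented; a real implementation would copy objects to a separate bucket or storage class, but the logic is the same: take a point-in-time copy, verify integrity, and restore from it after a failure.

```python
import hashlib

# Object store modeled as a dict of key -> bytes; names are invented.
store = {"db/records.dat": b"live data"}

def snapshot(store):
    """Point-in-time copy of every object, paired with a content
    hash so a restore can be verified before it overwrites anything."""
    return {k: (v, hashlib.sha256(v).hexdigest()) for k, v in store.items()}

snap = snapshot(store)
store["db/records.dat"] = b"corrupted!"  # simulated failure

# Recovery: check each snapshot object against its hash, then restore.
for key, (data, digest) in snap.items():
    assert hashlib.sha256(data).hexdigest() == digest
    store[key] = data
```

Because the snapshot lives in cheap object storage rather than on primary NAS, this kind of protection costs far less than a second full-performance copy.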
At first blush, legacy applications and the cloud can seem like oil and water: not easily mixed. But as discussed here, that needn’t be the case. Many technologies and solutions now available efficiently connect legacy applications to the well-known advantages of the cloud.
Scott Jeschonek is director of Cloud Solutions at Avere Systems.