In the last few years, an increasing number of organizations have moved their Tier 2 and 3 applications from local data centers (on-premises) to the cloud (off-premises), with many committing to cloud-native applications.

Until now, Tier 1 business-critical applications have stayed firmly on-premises. But many organizations are re-evaluating the decision not to move these applications to the cloud, and more specifically high-performance databases like Oracle. There are many catalysts for this cloud migration, not least supply chain shortages and the cost-cutting imposed by the economic climate. These, added to the bottom-line benefits the cloud can deliver, from agility and flexibility to efficiency, could lead you to assume that all IT teams would choose off-premises over on-premises. Why, then, do some organizations choose not to migrate?

When I talk to cloud or infrastructure architects, they sometimes hesitate when it comes to migrating high-performance database workloads like Oracle to the public cloud. And when I probe a little further, it's clear that lingering concerns remain about migrating such workloads, even though those concerns are either no longer an issue or could easily be mitigated. Given the technologies available today to address these challenges, these ideas have turned into myths rather than valid concerns.

Most IT architects think there are still compromises: that if they migrate high-performance applications to the cloud they won't get the same capabilities their on-premises SANs offer, or that the cost will be the same or even higher. And that can be true, especially if you pick an off-the-shelf native option. But there are alternatives.

I’m here to set the record straight - and help you to prepare for cloud transformation so that you can be confident you’re making the right decision for your business-critical workloads.

Myth 1: The cloud doesn’t offer the high performance and low latency that database workloads require

Truth: there are non-native cloud solutions that offer the high performance and consistently low latency needed for faster applications. For an organization, this can mean accelerated data analysis, business intelligence, product innovation, and a better digital customer experience, just for starters. It is a case of finding the right fit for the workload.

There will be an option that meets these requirements, but it means looking beyond the native public cloud options that are presented to buyers as the best on offer.

Some software-defined storage solutions available today, for example, are capable of delivering performance equivalent to local flash, with consistently low latency, when provisioned with storage-optimized instances. In fact, some solutions can deliver up to 1M sustained IOPS per volume with sub-millisecond (<1 ms) tail latency, compared to native public cloud storage solutions that top out at 260K IOPS. To put this into perspective, it’s highly unlikely that your application will ever need more than 1M IOPS. But there’s peace of mind in knowing that that level of performance is available, and that provisioning it won’t break your budget.
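Whichever storage option you evaluate, it pays to measure rather than trust datasheet numbers. Below is a minimal sketch of how you might do that, driving the open-source fio benchmark from Python on the instance under test. The test file path, block size, and queue depth are placeholder assumptions you would tune for your own workload, and the JSON field names can vary slightly between fio versions.

```python
import json
import subprocess

# Hypothetical test target; point this at the volume under evaluation.
TEST_FILE = "/mnt/dbvol/fio-testfile"

# Run a 4K random-read test via fio (must be installed on the instance)
# and capture its JSON report. The flags mirror a typical OLTP-style profile.
result = subprocess.run(
    [
        "fio", "--name=randread", "--filename=" + TEST_FILE,
        "--rw=randread", "--bs=4k", "--iodepth=64", "--numjobs=4",
        "--size=10G", "--runtime=60", "--time_based", "--direct=1",
        "--ioengine=libaio", "--group_reporting", "--output-format=json",
    ],
    capture_output=True, text=True, check=True,
)

report = json.loads(result.stdout)
job = report["jobs"][0]["read"]

iops = job["iops"]
# Tail latency: 99th percentile completion latency, reported in
# nanoseconds by recent fio releases (older versions use "clat" in usec).
p99_ms = job["clat_ns"]["percentile"]["99.000000"] / 1_000_000

print(f"Sustained read IOPS: {iops:,.0f}")
print(f"p99 read latency:    {p99_ms:.2f} ms")
print("Meets <1 ms tail-latency target" if p99_ms < 1.0 else "Above 1 ms target")
```

Running the same profile against a native volume and a software-defined alternative on storage-optimized instances gives you a like-for-like view of where your workload actually lands.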

Myth 2: The cloud doesn’t offer the SAN capabilities needed for data workloads

Truth: a number of vendors have developed software-defined storage solutions that deliver the rich data services you’re used to with your on-premises SAN: you just need to know where to look for them.

SAN capabilities are crucial, bringing added benefits to the business including data security and disaster recovery. But although cloud infrastructure has much of what’s needed to migrate and run high-performance databases, natively it can lack basic SAN capabilities.

Some specialist software-defined storage solutions on the market today, however, offer features such as automation, centralized storage management, snapshots, clones, thin provisioning, compression, and more. Different vendors will focus on different capabilities, but there is bound to be a suitable product for your specific requirements that you can adopt for a highly successful migration to the cloud.

Not all platforms are configured alike, though, and in some of the more popular ones the services are available a la carte and require administrative oversight, which can be time-consuming and cost money in the long run. Instead, look for options with hassle-free data services that are built in and do not require expensive additional licensing.
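To see what "a la carte" looks like in practice, here is a minimal sketch of scripting one of those services yourself on a native platform: taking and tagging an EBS snapshot with AWS's boto3 SDK. The volume ID, region, and retention tag are placeholders; the point is that each capability like this becomes something your team has to script, schedule, and monitor, rather than something built into the storage layer.

```python
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an example

VOLUME_ID = "vol-0123456789abcdef0"  # placeholder volume ID

# Create a point-in-time snapshot of the database volume and tag it so a
# separate cleanup job (also yours to write) can enforce retention later.
snapshot = ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description="Nightly Oracle data volume snapshot "
                + datetime.now(timezone.utc).strftime("%Y-%m-%d"),
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [
            {"Key": "app", "Value": "oracle-prod"},
            {"Key": "retention-days", "Value": "14"},
        ],
    }],
)

print("Started snapshot:", snapshot["SnapshotId"])
```

Multiply this by clones, replication, and compression, and the administrative overhead of assembling SAN-like services piecemeal becomes clear.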

Myth 3: High-performance workloads can’t be run on the cloud cost-efficiently

Truth: there are numerous software-defined solutions that will allow you to scale compute independently of storage to keep costs lower and more predictable.

That said, this cost myth is probably one of the biggest reasons why large-scale, high-performance database workloads haven’t been migrated to the cloud. The cost structures of most public clouds can make unpredictable, IO-intensive workloads expensive. On most clouds, higher provisioned IOPS cost more per GB. Put simply: the more IOPS you provision for, the more you pay.

It doesn’t have to be that way, however. You can turn to solutions that keep costs consistently low even if more IOPS are needed, allowing you to predict your spend. In addition, some of these technologies are disaggregated, so compute scales independently of storage. This ability to dynamically scale infrastructure in any direction (up, out, or in) can have a dramatic impact on cost efficiency and on your ability to meet SLAs while keeping pace with unpredictable business demands.
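A rough back-of-the-envelope comparison makes the difference concrete. The sketch below contrasts a pay-per-provisioned-IOPS model with a capacity-only pricing model; the dollar rates are purely illustrative placeholders, not any vendor's actual price list, so substitute your provider's published pricing before drawing conclusions.

```python
# Illustrative monthly cost comparison for a 10 TB database volume.
# All rates below are made-up placeholders; plug in real list prices.

capacity_gb = 10_000          # 10 TB data volume
provisioned_iops = 200_000    # headroom for peak load

# Model A: native block storage that charges per GB *and* per provisioned IOPS.
native_gb_rate = 0.125        # $/GB-month (placeholder)
native_iops_rate = 0.05       # $/provisioned-IOPS-month (placeholder)
native_cost = capacity_gb * native_gb_rate + provisioned_iops * native_iops_rate

# Model B: software-defined storage priced on capacity only, with IOPS
# delivered by the storage-optimized instances you already run.
sds_gb_rate = 0.20            # $/GB-month (placeholder, capacity-only)
instance_cost = 3_000         # $/month for storage-optimized instances (placeholder)
sds_cost = capacity_gb * sds_gb_rate + instance_cost

print(f"Per-IOPS pricing model: ${native_cost:,.0f}/month")
print(f"Capacity-only model:    ${sds_cost:,.0f}/month")
# With per-IOPS pricing, cost scales with the peak you provision for;
# with capacity-only pricing, it scales with the data you actually store.
```

The exact numbers matter less than the shape of the curve: when IOPS are decoupled from price, provisioning for peaks stops being the dominant line item.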

And of course, there are costs beyond storage to consider. To determine whether the cloud delivers a better price/performance scenario than the on-premises alternatives, it’s important to take into account all the costs of managing the on-premises solutions (hardware, software, networking, data center overhead, administrative overhead, time to provision the systems, etc.).

Myth 4: The cloud does not protect against data loss

Truth: hyperscale public clouds offer higher durability and availability than most on-premises data centers. They have built-in support for effective disaster recovery (DR) architectures, with multiple availability zones (AZs) and regions that safeguard against data loss and provide business continuity reassurance. Data services such as snapshots, clones, and built-in incremental backup and restore offer added peace of mind.
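As one concrete building block of such a DR architecture, the sketch below copies an existing snapshot into a second region using boto3, so a regional outage doesn't take your only backup with it. The snapshot ID and regions are placeholders; a production setup would also handle encryption keys, copy limits, and retention, which is exactly the kind of work some specialist solutions automate for you.

```python
import boto3

SOURCE_REGION = "us-east-1"             # placeholder primary region
DR_REGION = "us-west-2"                 # placeholder DR region
SNAPSHOT_ID = "snap-0123456789abcdef0"  # placeholder snapshot in the source region

# The copy request is issued against the *destination* region's API endpoint.
ec2_dr = boto3.client("ec2", region_name=DR_REGION)

copy = ec2_dr.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=SNAPSHOT_ID,
    Description="Cross-region DR copy of Oracle data volume snapshot",
)

print("DR copy started in", DR_REGION, "->", copy["SnapshotId"])
```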

In fact, some organizations’ initial move to the cloud is a hybrid implementation. By running their workloads on-premises and syncing the data to the cloud, these users gain an extra layer of protection. And to migrate to the cloud at your own pace and on your own terms, some newer storage solutions allow you to port software licenses between on-premises data centers and the cloud, so you can run your workloads wherever it makes the most sense.

Of course, the pricing structure for these services varies; look for providers that include data services in the license and offer backups and restores at no extra cost.

Myth 5: The cloud isn’t suited to unpredictable workloads

Truth: some cloud solutions have built-in auto-scaling capabilities, so you only pay for the capacity you use, not what you provision for. It’s common practice for IT organizations to overprovision compute and/or storage to ensure business continuity when unpredictable workloads arise, and that overprovisioning drives up costs.

To mitigate this, some organizations burst their workloads to the cloud, a hybrid implementation where cloud resources are provisioned as needed to accommodate spikes in demand and then decommissioned when no longer required. Other organizations migrate their workload to the cloud completely to take advantage of the cloud’s dynamic and automated provisioning capabilities.

But if they're using native public cloud storage, they may still be paying for what they provision rather than what they use. What they should be looking for is a solution with built-in auto-scaling capabilities, where they only pay for the capacity actually used. These are available, but you do have to look beyond the native offerings.

The bottom line is that you no longer have to anticipate how much capacity or compute you're going to need. You can configure your cloud infrastructure to monitor demand and auto-scale as needed for maximum cost efficiency. This guards against overprovisioning while still providing enough resources for your workloads. It can be a game changer for organizations where the volumes they would need to provision are significantly higher than those they end up using. The savings can make the migration to the cloud an easy choice, one that allows them to divert budget to projects that were previously not viable due to lack of available funds and administration time.
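If you do stay on native storage, "monitor demand and auto-scale" typically means wiring it up yourself. The sketch below shows a bare-bones version of that do-it-yourself route, growing an EBS volume when filesystem utilization crosses a threshold. The volume ID, mount point, and thresholds are placeholders, and note that with native block storage you still pay for whatever size you grow to, which is the gap the built-in auto-scaling solutions close.

```python
import shutil
import boto3

VOLUME_ID = "vol-0123456789abcdef0"   # placeholder EBS volume backing the database
MOUNT_POINT = "/mnt/dbvol"            # placeholder mount point
USAGE_THRESHOLD = 0.80                # grow when the filesystem is 80% full
GROWTH_FACTOR = 1.25                  # grow capacity by 25% each time

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an example


def maybe_grow_volume() -> None:
    """Grow the backing volume if filesystem utilization crosses the threshold."""
    usage = shutil.disk_usage(MOUNT_POINT)
    utilization = usage.used / usage.total
    if utilization < USAGE_THRESHOLD:
        print(f"Utilization {utilization:.0%} is under threshold; nothing to do.")
        return

    current_size_gb = ec2.describe_volumes(VolumeIds=[VOLUME_ID])["Volumes"][0]["Size"]
    new_size_gb = int(current_size_gb * GROWTH_FACTOR)
    ec2.modify_volume(VolumeId=VOLUME_ID, Size=new_size_gb)
    print(f"Requested resize {current_size_gb} GiB -> {new_size_gb} GiB "
          "(the filesystem still has to be grown separately).")


if __name__ == "__main__":
    # Run this from cron or a scheduler; a real setup would also watch IOPS,
    # respect the waiting period between EBS volume modifications, and alert
    # on failures.
    maybe_grow_volume()
```

That operational overhead, and the fact that cost still tracks provisioned size, is why built-in, usage-based auto-scaling is worth seeking out.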

Technology evolves at varying speeds - what was true a few years ago may no longer be the case. Continually looking for new developments that may be a fit for your data center, or for your data strategy overall, can make a significant difference to your organization’s ability to bring products to market quicker, remain competitive, and increase profits.

As Sir Francis Bacon famously said, "knowledge is power" - and so, with these myths debunked, my final piece of advice is to understand your workloads. Public cloud infrastructure now provides everything you need to migrate your high-performance databases away from your premises. While the compute and storage domains might be unique to each cloud service provider (think AWS, Azure, and GCP), it’s easy to educate yourself on the nuances of each platform. But it’s only by knowing what you need that you can choose the right platform provider. And it’s only by really understanding your workloads that you can know whether it’s time to migrate to the cloud, and which provider offers what you need to make the migration a success.