The move from on-premise to the public cloud seems impossible to stop. Many organizations continue to take advantage of the storage and applications provided by hyperscalers on the assumption that doing so is far more cost-effective and flexible. But is that truly the case, and is it really the most secure approach for CIOs and IT teams building their IT environments?

A cloud approach often comes with hidden additional costs for security and storage, and many companies have been caught out by them. Latency problems persist, and vendor lock-in is real. Together, these issues are causing organizations to miss out on the operational flexibility they were promised – and prompting a rethink of public cloud strategies, with many now perceiving the cloud as expensive and risky.

In an IT world where cloud has been heavily promoted, what is the better alternative? On-premise infrastructure answers the security and latency concerns and deserves renewed consideration. However, few companies are in a position to contemplate major capital investment in new hardware or to rebuild their in-house IT teams. So how can organizations reduce costs and improve security while holding onto the benefits of flexible, usage-based pricing?

Public cloud services typically cost double the on-premise equivalent

Organizations of all sizes, across the private and public sectors, have bought into the idea that shared IT infrastructure offers better value for money than a dedicated, on-premise setup. Consequently, the UK cloud computing market is worth £7.5 billion ($9.49bn) – and its domination by just three vendors is being investigated by the CMA. The review is designed to address concerns raised by Ofcom about exit fees, lack of flexibility, and the structure of financial agreements. Valid as those concerns are, they are a distraction from the biggest problem: public cloud services typically cost double the equivalent on-premise setup.

Every organization using one of the big three hyperscalers is effectively paying twice as much as it should for essential IT systems, including storage and application hosting. Worse, it is paying for a service that is drastically less secure and generally less well supported than an on-premise alternative.

Security is becoming a real worry for businesses dependent on the public cloud. Their very dominance makes the big three hyperscalers a prime target for hackers: Distributed Denial of Service (DDoS) attacks on these organizations occur almost continuously, creating huge security vulnerabilities.

A DDoS attack can not only prevent access to key services, causing serious operational issues; more dangerously, it can expose weaknesses in the security posture that attackers can then use to access critical data. So why are organizations still choosing to pay a fortune for services that are less secure and less flexible than an on-premise option?

Unseen, unexpected costs

The cloud model typically appeals to organizations because of the shift from capital expenditure (capex) to operational expenditure (opex). The notion that costs are known, with a set monthly subscription, is attractive. The option to scale up and down in line with demand is compelling, especially compared with the challenges of setting up new servers under traditional on-premise models. However, it is the hidden costs of the cloud that have caught so many companies by surprise.

The hyperscalers’ financial calculators look straightforward, but buried in the fine print is the fact that every additional slice of service and support costs extra. The extra – and much-needed – security costs more. Storage cost models are unclear too: the promised price per terabyte looks fantastic, until a company discovers it is being charged not only to store data but also to delete it. Uploads are free, but the organization is charged for every object downloaded. As the rough estimate below illustrates, the monthly bill can often be two or even three times the expected amount, blowing holes in planned budgets.
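
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch, in Python, of how egress and per-request fees can inflate a headline storage price. All rates and volumes below are hypothetical placeholders, not any specific hyperscaler’s published pricing.

STORAGE_PER_TB = 20.00   # $/TB-month: the attractive headline "price per terabyte"
EGRESS_PER_GB = 0.09     # $/GB downloaded (uploads are free)
GET_PER_10K = 0.004      # $ per 10,000 object downloads (GET requests)

def monthly_bill(stored_tb: float, downloaded_gb: float, downloads: int) -> float:
    """Estimate one month's bill: storage plus the often-overlooked extras."""
    storage = stored_tb * STORAGE_PER_TB
    egress = downloaded_gb * EGRESS_PER_GB
    requests = (downloads / 10_000) * GET_PER_10K
    return storage + egress + requests

# 50 TB stored looks like $1,000/month on the headline rate alone...
headline = 50 * STORAGE_PER_TB
# ...but reading back 20 TB of it (20,480 GB) across 5 million requests
# nearly triples the bill.
actual = monthly_bill(stored_tb=50, downloaded_gb=20_480, downloads=5_000_000)
print(f"Headline estimate: ${headline:,.2f}")  # $1,000.00
print(f"Bill with extras:  ${actual:,.2f}")    # $2,845.20

Even modest read traffic, in other words, can dwarf the headline storage line item.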

Add in the limitations on bandwidth and the additional charges for CPU or RAM – plus the fact that a business running VMware will be paying again on those same usage factors – and it is little wonder that the cost of the public cloud has far exceeded any CTO’s original expectations.

Service Integration and Management (SIAM)

So how can businesses achieve the necessary level of security cost-effectively, without reverting to large and unaffordable capital expenditure? The answer: take back control and bring equipment back on-premise, while retaining the benefits of cloud technology, including remote support and flexible finance and usage models that meet operational requirements.

A growing number of Service Integration and Management (SIAM) companies have recognized the fundamental issues with public cloud services. They are offering a ‘back to the future’ on-premise model with the essential flexibility intact: servers can be spun up on-premise as needed, with costs linked to usage; support is included; and, by moving back on-premise, the security risks are mitigated.

Any business worried about having to rebuild a server room or employ dedicated tech experts need not be: neither is an issue. The latest generation of servers can run at higher temperatures, so there is no need to recreate the air-conditioned server rooms of the past; the servers can simply sit within existing network rooms or offices. If the business lacks space, the entire system can be securely co-located within a dedicated, locked rack. Tech support is included as part of the service, with providers using the same remote, open source tooling that underpins cloud services to keep the on-premise systems running effectively at low cost.

Future-proofing

Bringing this important infrastructure back into the business is not only cheaper but inherently more secure. Instead of the open, public-access model operated by the large hyperscalers, an on-premise set-up takes the opposite approach: everything is locked down first, and access is opened up only as needed, using highly secure tunnels to safeguard the business – a posture sketched below. Moreover, since the whole private cloud set-up is owned by the company, any required security changes can be made instantly. Nor is there any of the interconnected public cloud risk that has led to devastating, extended attacks across key public services in recent years.
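
As a minimal illustration of that locked-down-first posture – assuming, purely for the example, a WireGuard-style tunnel subnet and hypothetical service names and addresses – the access logic looks like this:

import ipaddress

# Default deny: every service starts closed; access is opened only for
# sources arriving over a known secure tunnel. All values are hypothetical.
TUNNEL_SUBNET = ipaddress.ip_network("10.8.0.0/24")   # assumed VPN range
OFFICE_LAN = ipaddress.ip_network("192.168.10.0/24")  # assumed local network

# Explicit allow-list: service -> networks permitted to reach it.
ALLOWED = {
    "ssh": [TUNNEL_SUBNET],             # remote support over the tunnel only
    "backups": [TUNNEL_SUBNET, OFFICE_LAN],
}

def is_allowed(service: str, source_ip: str) -> bool:
    """Unknown services and unknown sources are refused by default."""
    src = ipaddress.ip_address(source_ip)
    return any(src in net for net in ALLOWED.get(service, []))

assert is_allowed("ssh", "10.8.0.5")           # engineer on the tunnel: allowed
assert not is_allowed("ssh", "203.0.113.7")    # open internet: refused
assert not is_allowed("database", "10.8.0.5")  # service never opened: refused

In practice this logic lives in the firewall or VPN configuration rather than application code, but the principle is the same: nothing is reachable unless it has been explicitly allowed.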

The ability to regain this degree of control is encouraging increasing numbers of organizations, across the public and private sectors, to actively bring data and systems back in-house – typically those with serious concerns about data security.

They are also unhappy about the growing latency problems caused by the additional layers of security the hyperscalers are putting in place – an issue that disappears when systems are on-premise. Further, there is an acknowledgment that dependence on the public cloud adds operational risk: any interruption to the internet connection leaves the entire organization unable to operate.

Times are changing yet again. Sure, the public cloud has its place and is great for hosting a website or public-facing apps.

However, as organizations realize that their IT infrastructure could be cheaper and more secure on-premise, IT teams are starting to go “back to the future”: taking control of their systems and costs by deploying an on-premise private cloud instead.