We all know the old saying: no one ever got sacked for buying IBM. Today, many organisations take the same attitude towards the cloud providers as they shift compute away from in-house facilities, buying into a one-stop cloud platform for all of their computing requirements.
Who wants the cost of big ticket items?
However, removing big-ticket technology items from your organisation’s balance sheet and pushing the vast majority of your applications and data into the cloud may prove premature and, in the longer term, overly expensive. Organisations now run more applications than ever to stay competitive in their markets, and many are finding that not all of them are best suited to the public cloud.
The main cloud operators, companies such as AWS, Microsoft and Google, have worked hard to position their platforms as easy to implement, simple to scale and reasonably priced. The growth of artificial intelligence and machine learning applications gave the cloud an opportunity to promote itself as the go-to solution for data-heavy compute requirements. Attracted by low-cost start-up contracts and the massive compute capability on offer, many organisations signed up without properly investigating scale-up costs or the difficulty of switching contracts once budgets were stretched.
Many customers of cloud platforms learnt the hard way and, as a result, cloud repatriation is on the rise. According to IDC’s report “Increased Services, Pullback From Public Clouds Huge IT Disrupters”, cloud repatriation has grown increasingly popular in recent years, with 80 percent of companies planning to repatriate at least some of the workloads they currently host in the public cloud.
A key reason for the change in mindset is the diversity of workloads that comes with running multiple applications, which can be highly complex and have unique requirements for server instances, storage volumes, networking, power, heating and cooling, and even physical location.
Blurred lines
The lines between cloud and data centre are blurring, giving organisations more opportunities to site workloads where they deliver the greatest compute benefit. Many organisations now view a hybrid approach to compute capacity as essential to their developing strategies, and many colocation providers are investing in ways to help tenants improve efficiency between public and private cloud workloads while controlling costs and meeting SLAs.
Energy costs relating to cooling account for around 37 percent of overall data centre power consumption, so it has become increasingly important to the sustainability credentials of data centres of all persuasions to establish pathways to greater energy efficiency across their sites. Converged infrastructure helps on the operational side as well: pre-configured solutions that are fully tested, validated and fast to deploy can reduce tenants’ time to production by up to 80 percent.
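To put that cooling figure in context, a rough calculation shows how an efficiency gain in cooling translates into site-wide savings. This is a minimal sketch: the 37 percent share is the figure cited above, while the site load and the size of the improvement are illustrative assumptions.

```python
# Illustrative only: estimate the site-wide effect of a cooling-efficiency gain,
# using the ~37 percent cooling share cited above. The site load and the size
# of the improvement are hypothetical assumptions.
COOLING_SHARE = 0.37         # cooling's share of total site power (cited figure)
site_load_kw = 2_000         # assumed total site load in kW
cooling_gain = 0.10          # assume cooling energy is cut by 10%

cooling_kw = site_load_kw * COOLING_SHARE
saved_kw = cooling_kw * cooling_gain
print(f"Cooling load: {cooling_kw:.0f} kW")
print(f"Saving: {saved_kw:.0f} kW ({saved_kw / site_load_kw:.1%} of site total)")
# -> a 10% cooling improvement trims total site power by about 3.7%.
```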
Automated environmental monitoring within the technology space, often implemented alongside converged infrastructure, helps improve cooling and equipment efficiency as well as operational effectiveness, which can reduce overhead costs, often quite considerably.
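As an illustration of the idea, the sketch below polls rack-level inlet temperatures and flags hot spots. The sensor-reading function, rack names and threshold are hypothetical stand-ins for whatever monitoring feed a given facility actually exposes.

```python
import random

# Hypothetical stand-in for a real DCIM or sensor feed; a production system
# would read from the facility's actual monitoring interface instead.
def read_inlet_temp_c(rack_id: str) -> float:
    return random.uniform(18.0, 32.0)  # simulated inlet temperature in Celsius

# ASHRAE's recommended inlet envelope tops out around 27 C; treat anything
# above that as a hot spot worth investigating.
HOT_SPOT_THRESHOLD_C = 27.0
RACKS = ["rack-01", "rack-02", "rack-03"]

def poll_once() -> None:
    for rack in RACKS:
        temp = read_inlet_temp_c(rack)
        status = "ALERT" if temp > HOT_SPOT_THRESHOLD_C else "ok"
        print(f"{status:5} {rack}: inlet {temp:.1f} C")

if __name__ == "__main__":
    poll_once()  # in practice, run on a schedule and feed alerts to ops tooling
```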
Interconnection to partner ecosystems
The pandemic demonstrated to senior management and technical teams that virtual business practices can be effective. Colo tenants and providers were quick to act on this, developing new ways to engage at each stage of the relationship cycle.
Colo is fast becoming the destination for connecting enterprises, service providers and cloud platforms. Interconnection services are the physical connections that enable data exchange between two or more partners at the fastest available speeds, combining high-performance networks with physical proximity. Leading colo operators now offer interconnection services that streamline migration across facilities and provide easy access to partner ecosystems.
For latency-sensitive applications, a distributed architecture provides the deployment flexibility to support the most demanding customer requirements. With this level of connectivity in place, deploying compute and storage across whichever locations best suit the application and its cost requirements becomes not only possible but essential.
Colo success lies in exactly this: helping tenants improve efficiency between public and private cloud workloads while controlling costs and meeting SLAs. Hybrid cloud architectures provide the flexibility organisations need to let application workload requirements determine where each workload should run, as the sketch below illustrates.
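Here is a minimal sketch of that placement logic. The workload attributes, thresholds and venue names are illustrative assumptions, not any particular provider’s policy.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_sensitive: bool   # needs very low round-trip times to users
    data_volume_tb: float     # stored data; large sets are costly to egress
    bursty: bool              # spiky demand favours elastic public cloud

def place(w: Workload) -> str:
    """Illustrative placement rule: let workload requirements pick the venue."""
    if w.latency_sensitive:
        return "colo near users (edge/metro facility)"
    if w.data_volume_tb > 100:
        return "private cloud in colo (avoid egress fees on heavy data)"
    if w.bursty:
        return "public cloud (pay-as-you-go elasticity)"
    return "either venue; decide on unit cost and SLA"

for w in [
    Workload("trading-gateway", True, 2, False),
    Workload("ml-training-corpus", False, 400, False),
    Workload("campaign-site", False, 0.1, True),
]:
    print(f"{w.name}: {place(w)}")
```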
Inevitably sustainable
Sustainable practices have become important touchstones in boardrooms around the globe, as well as with customers, investors, governments, and the public. Customers want to be associated with suppliers that can demonstrate positive environmental policies and actions, and colos are rising to the challenge with numerous sustainability initiatives. Many data centres have negotiated renewable-energy utility contracts and other offsetting policies to reduce their net carbon emissions.
Physical network infrastructure is the strategic foundation that helps future-proof colos in a hybrid compute environment, providing the smart, scalable and efficient connectivity on which colos and their customers can compete and succeed in this hybrid global marketplace.