Over the last five years, innovations in software-defined storage, hyperconverged infrastructure (HCI), and cloud computing have ushered in an era of IT accelerating business growth. According to Gartner, the growth of cloud and industrialized services and the decline of traditional data center outsourcing indicate a massive shift toward hybrid infrastructure services – a market estimated to reach $91.74 billion by 2021.

But as infrastructure and operations leaders look ahead to a future of many clouds, new challenges have emerged. With various integration points and supplemental tech from shadow IT, organizations no longer have seamless control of all applications and lack the end-to-end solutions necessary to truly drive cloud innovation.

As we face a looming crisis around a rapidly changing cloud landscape with unprecedented management and cost complications, the industry must innovate to bring order to this reality.

Managing multiple clouds is complex


Hybrid, or multi-cloud, management is in its early stages of development, and all signs indicate that its complexity will continue to increase over time. IDG Research Services recently found that nearly 40 percent of organizations with some type of public cloud initiative had moved at least some of those workloads back on premises, mostly due to security and cost concerns. Enterprises are now becoming more judicious about what they move to the heavily metered cloud.

Attempts to define a hybrid cloud management strategy often fail because they do not account for workload profiles; an effective strategy must address what the applications do, including how they interact with end users, manage data, and handle networking, security, governance, performance and more.

To address this complexity, IT needs decision-making tools that make it easy to decide which applications should move to the cloud and which should remain on premises, with complete visibility and management across the entire stack.
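The kind of decision-making tool described above could, in its simplest form, score a workload against placement criteria. The following sketch is purely illustrative – the criteria, weights and thresholds are hypothetical examples, not any real product's decision model:

```python
# Toy workload-placement scorer. All criteria, weights, and thresholds
# are hypothetical examples for illustration only.

def placement_score(workload):
    """Return a score; positive suggests public cloud, negative on-premises."""
    score = 0
    # Bursty, variable demand tends to favor the cloud's elasticity.
    score += 2 if workload["demand"] == "bursty" else -1
    # Strict data-residency or compliance needs tend to favor on-premises.
    score -= 3 if workload["sensitive_data"] else 0
    # Heavy, sustained egress traffic is expensive in a metered cloud.
    score -= 2 if workload["egress_gb_per_month"] > 1000 else 0
    return score

def recommend(workload):
    return "public cloud" if placement_score(workload) > 0 else "on-premises"

app = {"demand": "bursty", "sensitive_data": False, "egress_gb_per_month": 50}
print(recommend(app))  # bursty, low-egress, non-sensitive -> public cloud
```

A production tool would of course weigh far more dimensions – performance, governance, licensing – but the principle of scoring each workload against explicit policy is the same.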

Private and public clouds lack sophisticated interoperability

A second harsh reality faced in a multi-cloud environment is interoperability. Private and public clouds all come with their own native APIs and resources, and manage storage, networking, provisioning, and security differently. The burden falls on customers to learn the native interface for each private and public cloud. But without sophisticated APIs, applications fail to connect seamlessly. And more than just a connection is needed; complex workloads demand an intelligent API that evolves and learns alongside the various environments.
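One common way to paper over divergent native APIs is a thin adapter layer that gives the management tooling a single interface to target. The class and method names below are hypothetical simplifications – real provider SDKs differ substantially:

```python
# Sketch of an adapter layer over cloud-specific provisioning APIs.
# Class and method names are illustrative, not real SDK calls.

from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """Uniform interface a management layer could target."""
    @abstractmethod
    def provision_vm(self, size: str) -> str: ...

class AWSAdapter(CloudAdapter):
    def provision_vm(self, size):
        # A real adapter would call the AWS-native API (e.g. EC2) here.
        return f"aws-instance-{size}"

class OpenStackAdapter(CloudAdapter):
    def provision_vm(self, size):
        # A real adapter would call the OpenStack-native API (e.g. Nova) here.
        return f"openstack-server-{size}"

def deploy(cloud: CloudAdapter, size="medium"):
    """Provision a VM without knowing which cloud is underneath."""
    return cloud.provision_vm(size)

print(deploy(AWSAdapter()))        # aws-instance-medium
print(deploy(OpenStackAdapter()))  # openstack-server-medium
```

The adapter absorbs each platform's quirks, so the caller's logic stays the same whichever cloud sits beneath it.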

As artificial intelligence technology evolves to bring greater sophistication to the data center, interoperability must be addressed soon to support the world of many clouds.

Data movement is limited

A third challenge is moving data to and from the cloud, which creates a nightmare of logistics, cost and licensing confusion. Right now, it's difficult to move workloads around because the cloud and on-premises infrastructure have separate user interfaces. Users must decide up front which workloads belong where, only to discover later that a workload is in a less optimal location – or orphaned altogether. Even worse, licensing models are confusing: it may be cheaper in dollars per VM to run a virtual machine in the cloud, but users could end up paying double the licensing fee.
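The licensing trap is easy to show with arithmetic. In the sketch below, all dollar figures are made-up examples, not real vendor pricing – the point is only that the lower per-hour VM rate does not decide the question once licensing is added in:

```python
# Why per-VM price alone can mislead. All figures are invented examples.

def monthly_cost(vm_rate, hours, license_fee):
    """Total monthly cost = compute charges + software licensing."""
    return vm_rate * hours + license_fee

hours = 730  # roughly one month of continuous operation

# The cloud VM looks cheaper per hour, but the software license
# is priced differently there.
cloud = monthly_cost(vm_rate=0.10, hours=hours, license_fee=200.0)
onprem = monthly_cost(vm_rate=0.14, hours=hours, license_fee=100.0)

print(f"cloud:   ${cloud:.2f}")   # $273.00
print(f"on-prem: ${onprem:.2f}")  # $202.20
```

Here the "cheaper" cloud VM ends up costing more per month once the license fee is counted – exactly the kind of total-cost comparison users need to make up front.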

To thrive in a multi-cloud world, IT needs more transparency and tools from vendors to support the right decisions up front about location, cost and licensing models. More sophisticated integrations are needed to accelerate data movement between clouds, coupled with analytics to ensure applications are in the location that provides the appropriate level of performance and cost.

Envisioning a future of autonomous cloud computing

Consider the current reality of an IT leader trying to monitor performance on Google Cloud Platform, OpenStack private cloud and Amazon Web Services. To get the most out of each platform simply and effectively, that leader would need a single management system that translates the nuances of each platform against the user's specific performance expectations. The system would need to decide where non-critical and mission-critical applications should live, making any moves necessary to get them into the most cost- and performance-optimized location to meet pre-set SLAs.

I believe the future is cloud resource orchestration: automating private and public cloud resource provisioning to achieve quality of service (QoS), SLA and security objectives. This requires dynamic provisioning, in which the system understands and interprets policies and requirements so that if an application exceeds preconfigured limits, new limits are automatically established, allocated and reported on in real time – much like a utility.
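The dynamic-provisioning behavior described above – raise an application's limit automatically when usage exceeds it, and report the change – can be sketched as a simple policy loop. The headroom factor and reporting format are hypothetical simplifications:

```python
# Sketch of a dynamic-provisioning policy loop: when an application
# exceeds a preconfigured limit, establish a new limit automatically
# and record the change. Policy details here are illustrative only.

def enforce_policy(app, usage, limits, headroom=1.25, events=None):
    """Raise any resource limit that observed usage has exceeded."""
    events = events if events is not None else []
    for resource, used in usage.items():
        limit = limits[resource]
        if used > limit:
            # Allocate headroom above observed demand, utility-style.
            new_limit = used * headroom
            limits[resource] = new_limit
            events.append(f"{app}: {resource} limit {limit} -> {new_limit:.1f}")
    return limits, events

limits = {"cpu_cores": 4, "memory_gb": 16}
usage = {"cpu_cores": 6, "memory_gb": 12}  # CPU over limit, memory under
limits, events = enforce_policy("billing-api", usage, limits)
print(events)  # CPU limit raised; memory limit untouched
```

A real orchestrator would also scale limits back down, check quotas and budgets, and feed the events into billing – but the core loop of observe, compare against policy, and reallocate is the same.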

The industry is just around the corner from truly intelligent and autonomous cloud management, with a single software stack using automation, artificial intelligence and machine learning to manage all applications and data across private and public clouds. Despite the many harsh realities of our current multi-cloud world, innovation in cloud automation and optimization solutions will continue to emerge and propel businesses into a new era of opportunity in the cloud.

John Spiers is executive vice president of strategy for storage and hyperconverged infrastructure vendor Pivot3.