Stand by for disruptive change. As demand soars, capacity will be provided by both colocation and cloud infrastructure – often in the same facilities.

The primary marketing value proposition of colo owners used to be “your mess for less”, but that era is now over and done with. Dead and gone.


Multiple issues

Instead, the buyer has to consider multiple issues, with a set of colocation and cloud services decisions that have grown in number, importance and complexity.

If I am an enterprise IT shop, I can virtualize my data center footprint and get massive advantages through consolidation into a smaller and more compact set of IT resources. But when I consolidate, am I altering the performance above the rack of my enterprise applications? And below the rack, am I making different physical and electrical demands of my data centers?

Do I need higher density? Do I want to use lower-carbon power? Or do my applications and customers now need the responses of network edge capabilities?
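To make the “below the rack” question concrete, here is a back-of-envelope sketch of how consolidation can change per-rack power density. Every figure in it (servers per rack, watts per server) is an illustrative assumption, not a measurement from any real facility.

```python
# Back-of-envelope sketch: how consolidation changes per-rack power density.
# All figures below are illustrative assumptions, not measured values.

def rack_density_kw(servers_per_rack: int, watts_per_server: float) -> float:
    """Approximate IT load per rack in kW."""
    return servers_per_rack * watts_per_server / 1000.0

# Before virtualization: many lightly loaded 1U servers (assumed ~300 W each).
before = rack_density_kw(servers_per_rack=30, watts_per_server=300)

# After consolidation: fewer, denser virtualization hosts running many VMs
# (assumed ~800 W each, packed 40 to a rack in a high-density configuration).
after = rack_density_kw(servers_per_rack=40, watts_per_server=800)

print(f"Assumed density before consolidation: {before:.1f} kW per rack")
print(f"Assumed density after consolidation:  {after:.1f} kW per rack")
# The same workloads now demand far more power and cooling per rack -
# exactly the new physical and electrical demand the buyer has to plan for.
```

Under those assumptions the per-rack load roughly triples, which is why consolidation forces a fresh conversation about power and cooling with whoever runs the facility.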

Virtualization has been underway for 10 years, but it is nowhere near as mature as we think. It has yet to be optimized across a vast swathe of enterprises. This leaves businesses with a set of questions to answer, and colocation providers that are ready to get involved in those decisions will have a ready-made advantage.

Any organization must consider whether it has the capacity to meet its current needs, and be sure that what it has is flexible enough to scale to meet any new requirements (or scale back for other business changes).

Enough of rigid, already

A colocation provider that locks a customer into any particular level of customization may become an unfortunate reminder of the rigidity of the in-house facility that customer is trying to escape.

Whether the resources are in-house, at a colo or in the cloud, they also have to operate efficiently, so the business is cost effective but also meets any regulations or energy-efficiency aspirations.

And then there is reliability. Everyone needs backup and disaster recovery. The cloud and the in-house data center provide those differently, and the colo can pick the best of both worlds.


Finally, the customer’s very business model and organizational structure may push it to one model or another.

In all this, no organization is a monolith. Some workloads can be run quite easily in public clouds, while others need (or are perceived to need) in-house resources.

This decision should be informed by what expertise is available, both in-house and at the service providers, but it should not be absolutely driven by those skills. Don’t build in house just because you have the staff – that’s no different to putting the cart before the horse.

New workloads will probably have to be internet-facing to handle the shift in customer behavior. They are also very likely to be splintered into many smaller app workloads that tend to be more “spiky” in nature.

A new network

All this may push the customer to a different kind of network service than the core network capabilities of their own in-house services, but may also rule out many of the numerous service providers wanting to talk to them.

Colo operators will find that their customers will vary widely in their business needs. It will be difficult to decide in which sectors, and with what scale of organization, they can best compete.

And just when they think they have adjusted to the new business environment, and understand their customers’ needs, the landscape will change again.

Despite the difficulty in assessing what to do, the key factor in any colo’s business decision will be speed. Time-to-market velocity and agility will rule.

At the same time, of course, and creating a tension with the whole idea of moving quickly, security and risk will also be front-of-mind for buyers.

Cost is critically important, both in terms of return on investment (ROI) and total cost of ownership (TCO), but it can be trumped by long-term business advantage.
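As a rough illustration of that trade-off, the sketch below compares a simple five-year TCO for an in-house build against colocation. All of the cost figures are hypothetical placeholders; a real evaluation would plug in the buyer’s own quotes and workload profile.

```python
# Minimal TCO comparison sketch. Every cost figure here is a hypothetical
# placeholder, not a quote from any real provider or build-out.

def simple_tco(capex: float, annual_opex: float, years: int) -> float:
    """Total cost of ownership: up-front capital plus operating cost over the term."""
    return capex + annual_opex * years

years = 5

# In-house build: high assumed capital outlay, lower assumed yearly running cost.
in_house = simple_tco(capex=2_000_000, annual_opex=400_000, years=years)

# Colocation: little assumed capital, higher assumed yearly fees.
colo = simple_tco(capex=100_000, annual_opex=700_000, years=years)

print(f"Assumed {years}-year TCO in-house: ${in_house:,.0f}")
print(f"Assumed {years}-year TCO colo:     ${colo:,.0f}")
# Even when the raw TCO favors one option, speed and flexibility can still
# justify the more expensive number - the long-term business advantage above.
```

The point of the sketch is not the numbers themselves but the shape of the decision: a cheaper TCO can still lose to the option that gets capacity to market faster.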

Owner/operators shouldn’t obsess over their own issues, but should help customers think through theirs. They should bite the bullet and become part of a transparent sourcing brokerage. That sounds like the end of any control of your margins, but if they don’t go that way, they will be left out of big deals, where most of the future value lies.

Classic issues

Under all this, the classic issues remain for the CEO of any colo franchise. Site selection is the most basic jumping-off point. As the famous real estate trope puts it, the most important factors are location, location, location.

In the data center industry, this is true when building new capacity or converting and refreshing existing facilities, and it is just as true when considering acquisition. The real estate value is likely to be the least important criterion. Power prices, sources and availability will be more important. Taxes, labor costs and local regulations will be vital too. We will all have to consider power sourcing and the carbon weight of our energy.

Having bought into that location, the colocation provider then has to sell that location to the customers, and this means convincing them that this location will accelerate and secure their digital transformation.

So investors and developers need to understand the regional market for on-premises data centers. How many potential customers are there, and what are their pain points and long-term plans?

Colo backers also need to know what the competition is doing: how much colo capacity is there, and how mature and flexible is it?

They also need network capacity to link to the outside world, and to offer the streaming content, interactivity, big data and analytics features that customers will want.

They must also handle thermal management, running the site cost-effectively without harming resiliency and availability.

For both power management and cooling, colos will need underlying DCIM (data center infrastructure management) and DCSO (data center service optimization) tools, presented and packaged to the customer’s advantage.

Uptime Institute Tier Classification could be important for branding and to keep the customer comfortable. Without it, the provider will have to explain and justify an equivalent availability standard in the face of increasing customer sophistication.


Multi-layered workloads

Workloads will be sourced and managed in an increasingly complex and multi-layered fashion. But organizations – both the colo and the end customer – will be expecting ever-increasing productivity, and this means staff will have to handle more virtual workloads.

This adds up to a requirement for simpler management. All these resources, whether in-house or in shared premises, must be automated and easier to manage.

Cloud and colo bodies will have to co-operate with open and private exchanges, and professional sourcing advisories and analytics.

If you’re the enterprise CIO, you can now upgrade capacity while off-loading the capital costs, but you must be ready to understand and exploit that world.

If you are a colo operator, you must be ready for hybrid and multi-cloud operations. You must have capacity ready to spin up at a moment’s notice, for just-in-time projects like DevOps, bare-metal builds and hyperscale open source.

It’s a brave world – if you can handle it.

This article appeared in the May/June issue of DatacenterDynamics magazine.