In our work at Romonet we have seen that large enterprise operators are beginning to realise that they need to start categorizing their data center capacity.

Does the DC need to be in a particular geographic location because of regulatory or latency constraints? An example would be a bank making high-frequency trades into a specific exchange, and so needing to be physically close to it, combined with regulatory requirements that confine it to a specific jurisdiction. In that case, location is the reason the enterprise selects capacity, whether building a DC or renting space, in that required place.

The next major reason is resilience and availability. The thing a lot of enterprise operators are finally learning from cloud operators is that legacy ‘no single point of failure’ designs, with 2(N+1) redundancy and restrictive environmental controls, are not really appropriate for many of the new platforms being operated. Consequently, a lot of people are now building environments much more in the style of a cloud operator, particularly given that most of the virtual servers they buy and run their applications on sit in N+1 environments with loose environmental controls.

Another key element (and one we see people get wrong quite a lot) is correctly managing the utilization of each location, especially where they own a site or lease fixed capacity in one.

As an example of this, what we frequently do for operators is cost the capacity used, giving a unit cost per kWh of IT energy delivered from their data center to their IT equipment. Below is a sample cost output:

[Chart: sample unit cost per delivered IT kWh, comparing owned sites in different locations with legacy and metered colocation contracts]

These figures are based on the amortized capital cost of the assets you own, staffing and maintenance costs, and energy costs. We also show a couple of colocation contracts: the big dark green column on the right represents one of the legacy-style ‘pay for circuit capacity irrespective of what you use’ contracts, which manage your utilization poorly and which a lot of legacy enterprises still hold. The colo market has moved a long way, and you now have the metered colo option, where you pay less for the capacity allocated to you and more based on your actual demand and draw (this is the second column from the left). We also show some of the base operating costs that can be achieved by building or owning your own capacity in different places. Once you’ve categorized your data center space, your old colo may still be worth paying for where a service genuinely needs that particular location, but it’s likely that much of the platform you run doesn’t need to bear that cost.
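To make the arithmetic concrete, here is a minimal sketch of how such a unit cost can be derived; every figure and function in it is invented for illustration and is not Romonet data or tooling. It amortizes capital cost over the asset life, adds annual staffing, maintenance and energy costs, and divides by the IT energy actually delivered, and it models the two colo contract styles described above.

```python
# Illustrative unit-cost model: cost per delivered IT kWh.
# All numbers are invented examples, not Romonet benchmarks.

HOURS_PER_YEAR = 8760

def owned_site_cost_per_it_kwh(capex, asset_life_years, staff_cost,
                               maintenance_cost, energy_cost_per_kwh,
                               it_capacity_kw, utilization, pue):
    """Annual cost of an owned site divided by IT kWh delivered."""
    it_kwh = it_capacity_kw * utilization * HOURS_PER_YEAR
    facility_kwh = it_kwh * pue                  # total energy drawn, incl. overhead
    annual_cost = (capex / asset_life_years      # straight-line amortization
                   + staff_cost
                   + maintenance_cost
                   + facility_kwh * energy_cost_per_kwh)
    return annual_cost / it_kwh

def fixed_colo_cost_per_it_kwh(price_per_allocated_kw_month,
                               allocated_kw, utilization):
    """Legacy contract: pay for allocated circuit capacity regardless of draw."""
    annual_cost = price_per_allocated_kw_month * allocated_kw * 12
    it_kwh = allocated_kw * utilization * HOURS_PER_YEAR
    return annual_cost / it_kwh

def metered_colo_cost_per_it_kwh(price_per_allocated_kw_month,
                                 metered_price_per_kwh,
                                 allocated_kw, utilization):
    """Metered contract: smaller capacity charge plus a charge on actual draw."""
    it_kwh = allocated_kw * utilization * HOURS_PER_YEAR
    annual_cost = (price_per_allocated_kw_month * allocated_kw * 12
                   + metered_price_per_kwh * it_kwh)
    return annual_cost / it_kwh

if __name__ == "__main__":
    util = 0.40  # a half-empty allocation makes every delivered kWh expensive
    print("owned       :", owned_site_cost_per_it_kwh(
        capex=10_000_000, asset_life_years=15, staff_cost=400_000,
        maintenance_cost=300_000, energy_cost_per_kwh=0.10,
        it_capacity_kw=1000, utilization=util, pue=1.5))
    print("fixed colo  :", fixed_colo_cost_per_it_kwh(150, 1000, util))
    print("metered colo:", metered_colo_cost_per_it_kwh(60, 0.12, 1000, util))
```

Run at low utilization, the fixed-allocation contract’s cost per delivered kWh balloons, while the metered contract tracks your actual draw much more closely.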

Keep your own capacity full or turn it off completely; if you do have expensive colo, for instance, don’t just slowly run it down, because the fixed costs remain while the delivered load shrinks. As pointed out earlier, some IT services need to be in a specific location, and colocation can be good for this. Some IT services may require owned IT equipment, so place these in owned, leased or colo DC capacity. There is a size element here too: realistically, below 500 kW to 1 MW you should lease capacity in a larger site, as it is unlikely you can be cost competitive unless you have very special requirements.
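A small sketch (again with invented figures) shows why half-empty capacity is so punishing: the fixed annual costs, amortized capital, staff and maintenance, are spread over fewer delivered kWh as utilization falls, so the unit cost climbs steeply.

```python
# Why 'keep it full or turn it off': fixed costs dominate at low utilization.
# Figures are invented for illustration.

HOURS_PER_YEAR = 8760
FIXED_ANNUAL_COST = 1_500_000   # amortized capital, staff, maintenance
VARIABLE_COST_PER_KWH = 0.15    # energy and other draw-dependent costs
IT_CAPACITY_KW = 1000

for utilization in (0.1, 0.25, 0.5, 0.75, 1.0):
    delivered_kwh = IT_CAPACITY_KW * utilization * HOURS_PER_YEAR
    unit_cost = FIXED_ANNUAL_COST / delivered_kwh + VARIABLE_COST_PER_KWH
    print(f"{utilization:>4.0%} full -> {unit_cost:.2f} per IT kWh")
```

At 10% utilization the unit cost here is roughly six times what it is at 100%, which is the whole argument for filling a site or exiting it.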

Enterprises are beginning to realise that most IT services work fine on public or private ‘cloud’ platforms, so put these wherever they are cheapest. You can fill the data center space you own with cheap, commodity servers and rent the same kind of servers by the hour in other people’s data centers, keeping the majority of your IT estate wherever it is cheapest this month, this week, or even this day.
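Tying the categorization together, the placement decision itself can be expressed very simply; the venues, prices, services and requirement tags below are all hypothetical, purely to illustrate the shape of the rule: location-bound services stay where they must be, services needing owned hardware go to owned or leased space, and everything else chases the cheapest price.

```python
# Hypothetical placement sketch: send each service to the cheapest venue
# that satisfies its constraints. Venues, prices and services are invented.

venues = {
    # venue name: (cost per IT kWh this month, capabilities)
    "legacy_colo_london": (0.55, {"exchange_proximity", "owned_hardware"}),
    "owned_dc_north":     (0.30, {"owned_hardware"}),
    "public_cloud":       (0.22, set()),
}

services = [
    # service name, set of requirements
    ("hft_gateway",     {"exchange_proximity"}),
    ("licensed_db",     {"owned_hardware"}),
    ("web_frontend",    set()),
    ("batch_analytics", set()),
]

for name, requirements in services:
    eligible = {v: cost for v, (cost, caps) in venues.items()
                if requirements <= caps}   # venue must cover every requirement
    venue = min(eligible, key=eligible.get)
    print(f"{name:15s} -> {venue} ({eligible[venue]:.2f}/kWh)")
```

In this toy example only the exchange-bound service ends up bearing the expensive legacy colo cost; everything else lands in the cheapest space it is allowed to use.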

In my next blog I will look at the existing assets you own and the options available to you. You may be surprised at some of our findings.