The speed and resilience of a data center are things we obsess over - finding efficiencies and improving performance is what makes working in IT so satisfying. We live for the next tech breakthrough that can process more, store more, or simply work harder on less power. Luckily, with Moore’s law (well, roughly), we’ve enjoyed a steady supply of these breakthroughs over the past decade.

But when the pressure is on to make a change, most data centers are slow. There’s a lot to consider, especially in larger facilities, where mission-critical workloads, multiple systems, failsafes, cooling setups and more all have to be taken into account.

Do we want fast change?


The speed of business is only increasing - the successful launch of in-memory analytics platforms such as SAP HANA shows that businesses are taking competitive advantage not just from big data, but from fast data. Time is, more than ever before, money. The trend over the near term appears to be hardware struggling to keep up with the demands placed upon it by new software workloads.

So when the pressure inevitably comes to begin migrating to faster, more efficient data centers, the DC team’s ability to control the risk of the process is far too limited. When varied workloads are pushed into the data center by the business, or something needs to change within the facility itself, questions arise about capacity and resilience that need very precise answers in order to avoid lengthy and costly mistakes.

Can we change fast?

The reality for most data center managers is that answering questions like “do we have the capacity?” or “are we exposing ourselves to too much risk?” comes down to a combination of historic trends, experience and a bit of educated guesswork. Therefore, quite rightly, the IT team will appeal to the business for time to assess and get things as close to “right” as possible - and who can blame them? The data center is dealing with mission-critical applications, SLAs and customer data; failure is unacceptable.

Right now, the facilities and IT teams in this situation simply can’t change fast. But businesses don’t have the time to wait on the data center. The collective data center industry is doing what it’s done for the past decade - it’s waiting for a technological breakthrough to save the day.

Is there a solution on the horizon?

Yes, there certainly is! New designs for software-defined, homogenized facilities look like they will go a long way towards meeting this need. The only problem is that for most organizations (those without ludicrously large IT budgets), software-defined data centers are between five and 10 years away.

That’s well beyond the timeline most businesses are giving their data center teams to implement change, sadly.

We therefore need to create data centers that can respond to these business needs with absolute control and visibility, removing risk from the equation. We call this concept the ‘Fluid Data Center’.

What’s a Fluid Data Center?

Rather than an amazing new cooling system, the Fluid Data Center is a concept in which capacity and risk can be accurately and quickly snapshotted. A Fluid Data Center can “pour” its resources towards either end of this spectrum in the safe knowledge of the impact this will have on the capacity or the resilience of the entire facility.

It can do this on a case by case basis, and can do this quickly.

It’s achieved by knowing exactly what is happening within the data center right now, and then using advanced engineering simulation tools to map out what the impact of any given change would be - not just in terms of power draw, but in terms of the airflow of a room, the additional strain on a given AC unit and so on, down to the fine detail.
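
To make that concrete, below is a minimal sketch of such a “what-if” check. It is purely illustrative - the real thing rests on full engineering simulation of airflow and cooling - and every name, number and threshold in it is an invented assumption, not Future Facilities’ actual tooling:

    # Illustrative only: a toy "what-if" capacity/risk check.
    # A real Fluid Data Center assessment would rest on full CFD and
    # engineering simulation; these names and figures are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class RoomSnapshot:
        it_load_kw: float           # current IT power draw
        power_capacity_kw: float    # usable power budget for the room
        cooling_capacity_kw: float  # heat the *active* AC units can remove

    def change_is_safe(room: RoomSnapshot, added_load_kw: float,
                       margin: float = 0.9) -> bool:
        """Does a proposed extra load fit within both power and cooling,
        keeping a safety margin on each?"""
        new_load = room.it_load_kw + added_load_kw
        return (new_load <= margin * room.power_capacity_kw
                and new_load <= margin * room.cooling_capacity_kw)

    snapshot = RoomSnapshot(it_load_kw=320.0, power_capacity_kw=500.0,
                            cooling_capacity_kw=420.0)
    print(change_is_safe(snapshot, added_load_kw=45.0))  # True - it fits
    print(change_is_safe(snapshot, added_load_kw=80.0))  # False - cooling-bound

The point of the sketch is the shape of the answer: a snapshot of the facility, a proposed change, and a fast, precise yes-or-no instead of educated guesswork.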

What this tends to produce, aside from happier business teams, is incredibly efficient data centers. At the moment, the only answer to not knowing precisely where a DC’s limits lie is to factor in a healthy safety margin - an extra AC unit or two, in simple cases. A Fluid Data Center keeps these turned off - bringing down PUE - and is able to communicate clearly to the business that they will be turned back on if X occurs.
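
As a back-of-the-envelope illustration (the figures below are invented): PUE is total facility power divided by IT power, so every kilowatt of standby cooling that can be switched off pulls the ratio down:

    # Hypothetical numbers showing how shedding standby cooling lowers PUE.
    # PUE = total facility power / IT equipment power.
    def pue(it_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
        return (it_kw + cooling_kw + other_overhead_kw) / it_kw

    IT_KW = 300.0
    print(round(pue(IT_KW, cooling_kw=150.0, other_overhead_kw=30.0), 2))  # 1.6  - spare AC running
    print(round(pue(IT_KW, cooling_kw=110.0, other_overhead_kw=30.0), 2))  # 1.47 - spare AC off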

A Fluid Data Center knows exactly how much juice it has, and the size of the container - and it uses this information to act faster and more safely than human predictions ever could.

But the best bit is that it’s a solution to a growing problem that is available right now, not just on the horizon.

Jon Leppard is a director at Future Facilities, a company that specializes in engineering simulation tools.