

Let's measure FUE: Floor Utilization Effectiveness

As a service provider to the colocation industry, we see the interiors of many wholesale, large retail and retail data centers, both in the mid-Atlantic and around the country. While facility power density design parameters (watts per square foot or square metre) have been growing slowly over time, rack densities are clearly increasing faster than the facilities built to house them.

Back in 2000, I was fortunate enough to be involved in some state-of-the-art data center design-build projects for a company that brings back memories to many — PSINet. Based in Northern Virginia, PSINet was one of the first commercial Internet service providers. At that time, I remember building some incredibly fault-tolerant facilities that could support as many as eight servers to a rack, nearly 2 kW. That was hot back then.


Source: Kirlin

Too hot for the building

Recently, I was speaking with a cloud firm that is using 19-inch racks that hold 114 servers and run at 16 kW per rack. My acquaintance told me that the company really wanted to use faster processors and run at 30 kW per rack, but that his colocation provider would not let them. The building, constructed only four years ago, simply was not designed to support that kind of power density.

I often visit other data center sites where 8,000-square-foot, 1.2 MW critical load pods are leased by Internet unicorns and the tenants can only occupy 50 percent of the space before reaching the design capacity. In a room that can fit 300 IT cabinets, only 150 cabinets are actually operating, leaving 4,000 square feet of empty, unusable space. This presents a challenge for the colocation industry and exposes an inability or unwillingness — as well as a growing need — to design data center facilities that can actually satisfy the increasing demand from higher-density customers.
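
To make the arithmetic behind that example explicit, here is a rough sketch; the per-rack draw is simply what those figures imply, not a number quoted by any tenant.

```python
# Rough numbers from the pod example above; the per-rack draw is implied, not quoted.
pod_area_sqft = 8_000
pod_power_kw = 1_200                  # 1.2 MW of critical load
rack_positions = 300                  # cabinets the room can physically hold
operating_racks = 150                 # cabinets before the pod's power is maxed out

design_kw_per_rack = pod_power_kw / rack_positions       # 4 kW average by design
implied_kw_per_rack = pod_power_kw / operating_racks     # 8 kW per operating rack
stranded_sqft = pod_area_sqft * (1 - operating_racks / rack_positions)

print(design_kw_per_rack, implied_kw_per_rack, stranded_sqft)  # 4.0 8.0 4000.0
```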

The needs of high performance computing and the capabilities of today’s infrastructure providers are incompatible

Like you, I’ve heard from just about every colocation salesperson that their site can support 20 kW to 30 kW racks. While I don’t doubt these claims, it’s very easy to see that today’s colocation facilities can only support this kind of extreme density on a sporadic basis. I am not aware of any commercial facility that is willing to offer an entire pod to a user who will fill every usable square foot with 30 kW racks.

I also find it odd that virtually every high-performance computing system I’ve come across has been installed in an enterprise facility rather than a commercial colocation facility. I suspect this is the case, in part, because commercial facilities do not even consider onboarding users that require a number of 30 kW to 50 kW cabinets. To put it simply, the needs of the high performance computing customer and the capabilities of today’s facility infrastructure providers are all too frequently incompatible.

It is time for design professionals and colocation service providers to consider ramping up the density of their designs. Cloud services, High-Performance Computing (HPC), Big Data Analytics, and other applications require density. From my own research, these customers are not opposed to housing their systems in secure commercial facilities — they just don’t have anywhere or anyone they can turn to for help.


PSInet/Telus facility

Source: Urbacon

Calculate floor usage

Next time you visit a large colocation site that utilizes the pod design, see if you can compute the Floor Utilization Effectiveness (FUE). This may seem silly to some, but I believe the time has come to consider such a metric. I define FUE as the amount of space leased from the service provider divided by the amount of space that can actually be fully occupied at the desired power density.

A building that can support a customer’s power needs while allowing them to fully utilize the entire pod would achieve a very respectable FUE of 1.0. If a building is constructed at 200 W/sq ft and the customer really needs 400 W/sq ft, the FUE would be 2.0.
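
As a back-of-the-envelope illustration (a minimal sketch; the function names are mine, not part of any standard), the metric can be computed either from floor areas or, when power is the limiting factor, from the density mismatch:

```python
def fue_from_area(leased_sqft: float, usable_sqft: float) -> float:
    """Leased floor area divided by the area that can actually be
    populated at the tenant's desired power density."""
    return leased_sqft / usable_sqft

def fue_from_density(required_w_per_sqft: float, design_w_per_sqft: float) -> float:
    """Equivalent view when power is the limit: the usable fraction of
    the floor shrinks in proportion to the density shortfall."""
    return required_w_per_sqft / design_w_per_sqft

print(fue_from_density(400, 200))   # 2.0 -- the 200 W/sq ft building, 400 W/sq ft need
print(fue_from_area(8_000, 4_000))  # 2.0 -- the 8,000 sq ft pod with only 4,000 sq ft usable
```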

With some attention paid to this metric, colocation providers can be evaluated on their ability to provide efficiently designed space, and their customers will not be expected to pay for space they cannot use.

There is a cost for all of that wasted space in a high FUE building. Think about it. If the architectural elements of the building are to be underutilized by 50 percent, which would be the case if only half of the floor space can be occupied, then the lease rate paid by the tenant will naturally reflect those costs. If the building envelope can shrink to fit the actual requirement, then the user of that colocation facility is going to get the best value for their monthly spend. Alternatively, for those buildings that are already constructed, if the power / cooling density can be doubled, then high-density users may be willing to pay big dollars to house their systems there.
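
To put a hypothetical number on that, suppose an all-in lease rate of $20 per square foot per month (a made-up figure, used only for arithmetic; only the half-empty pod comes from the examples above):

```python
# Hypothetical lease rate; not a figure from the article.
rate_usd_per_sqft_month = 20
load_kw = 1_200                 # the same 1.2 MW load either way

oversized_pod_sqft = 8_000      # today's pod: half the floor is stranded
right_sized_pod_sqft = 4_000    # envelope shrunk to fit the actual requirement

cost_oversized = rate_usd_per_sqft_month * oversized_pod_sqft      # $160,000 per month
cost_right_sized = rate_usd_per_sqft_month * right_sized_pod_sqft  # $80,000 per month

# Cost per kW of delivered load: roughly $133 versus $67 per kW-month.
print(cost_oversized / load_kw, cost_right_sized / load_kw)
```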

Some colocation consumers are paying too much for their power because providers are constructing spaces that are designed with their own needs in mind. An efficient data center design is one where the environment, including space, power and cooling, is a good match for the computing load. When 50 percent of the floor space must sit empty because the power and cooling capacity of the pod are already maxed out, then the providers have not done a good job of creating the right colocation environment for that customer.

The real challenge in the near term is designing facilities that are adaptable for both high and low density users. If a colo facility can be engineered for density flexibility, perhaps by using scalable power and cooling platforms, then providers will be able to target the capacity that is appropriate for each prospective tenant. Only then will the colo buyers be able to maximize the efficiency of their colo budgets. Today’s one-size-fits-all approach is not a good long-term design strategy.

It’s going to be interesting to see what happens over the next few years. If cloud computing becomes the standard for the applications we use on a daily basis, and HPC applications continue to surge, colocation service providers will eventually face a choice: design around tenants’ needs or face the possibility of becoming commoditized as the sea of 2000-2015 era data centers compete for the remaining low-density customers.

Steve Altizer is president of Compu Dynamics




Readers' comments (4)

  • The Green Grid already came out with a SUE (space usage effectiveness) metric a couple years ago. I always thought that this had some value, but have not heard much about it actually gaining much traction. Regardless, whether it is FUE or SUE, you highlight a problem that is real and most of the industry is still behind the curve on it.


  • Thank you Ian, I thought I was up to date on all things Green Grid, but there is indeed a proposed space usage effectiveness metric!

    In fact the acronym is not "SUE", but "SpUE" (pronounced "spew"?). But it does look at how many racks can actually be placed in a data center, and what the power density is.

    It seems to have been discussed at the Green Grid Forum in 2014, with a presentation from China (GG members only).

    And a year later, a press release announced a White Paper on SpUE, in Japanese.

    Peter Judge
    Global Editor, DCD

  • If only your commercial data centres would supply cold water to their customers, they could make a lot more money per square foot.

    We made a rack that can cool 200 kW, actually installed one, and ran it for two years with zero failures. But nobody wants to buy one!

    The problem is that most data centers are run by managers who are highly skilled in operations but don't have much appreciation for technology.


  • Long time since I've seen that logo!

    I ran the UK Hosting Centre at PSINet at that time and the colo boom really focused us on space utilisation.

    I used a revenue-per-rack metric as a KPI for the data centre and to show that the cost of repositioning some of the large freestanding equipment would be rapidly recovered.

    I agree with the point about the power ceiling being hit resulting in poor utilisation, and even saw that on a tour of a brand-new data centre with only one tenant, whose cage rendered the remaining space useless. I took my client elsewhere as I didn't want them paying for real estate that couldn't be leased.


  • Something important to consider here is that build density equates to the average across a whole site, and traditionally co-location providers have to cater for a wide variety of customer requirements. These can range from relatively low to very high density deployments, and customer deployment density is something that co-location providers generally have little control over. At VIRTUS, we encourage our customers to increase the density of their IT infrastructure deployment, which helps the customer save on their costs and helps both VIRTUS and the customer to maximise efficiency. The reason that we can offer this hyper-efficient solution is the intelligent design of our data centres. Whilst built to an average of 2 kW/m², we utilise the ‘flooded room principle’ for cooling, allowing significantly higher densities to be deployed (up to 40 kW per rack) without the need for costly supplementary cooling, which would otherwise require significantly lower densities elsewhere in the room to achieve the correct balance. Further to this, VIRTUS’ modular design supports this methodology, so that if we have a data hall of relatively low density, we can easily deploy enough cooling to support it, with the option to scale up in line with customer demand.

