Efficiency. It’s become a leading theme in the data center world. From hyperscalers to enterprises, organizations are confronting the costs and complexities of conventional data center builds and looking for alternatives that consume less power, use less water, run at higher density with smaller footprints, can be built more quickly with fewer resources, and cost less to build and operate.
However, we still see that the industry as a whole isn’t as efficient as it could be. The latest Global Data Center Survey shows that PUE remains stubbornly stuck ‒ and that figure doesn’t even factor in efficiencies around design, site selection, resource utilization, waste, and capacity utilization.
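For readers less familiar with the metric: PUE (Power Usage Effectiveness) is simply the ratio of a facility's total power draw to the power delivered to IT equipment, so a value of 1.0 would mean zero overhead. The sketch below is illustrative only; the example figures are hypothetical, not drawn from any survey.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power.

    1.0 would mean every watt reaches IT gear; everything above 1.0
    is overhead (cooling, power distribution losses, lighting, etc.).
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a facility drawing 10 MW total, 6.5 MW of it IT load
print(round(pue(10_000, 6_500), 2))  # → 1.54
```

A facility that trims cooling overhead lowers the numerator while the IT load stays constant, which is why design choices like liquid cooling show up directly in this ratio.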
As we at Nautilus look over the industry, watching new plans and groundbreaking innovation across the world, in data centers large and small, we see a few fundamental issues that stand in the way of greater efficiency. If all of us are serious about making the industry more sustainable, we have some problems to address, in order to accelerate change.
The first of these problems is a tension between uniformity and flexibility.
Efficiency is a worthy goal, but it’s only one objective in a data center build, and in many cases it isn’t the most important consideration. Data centers exist to support applications, and not every application is the same; so perhaps the most important consideration is that data centers be flexible enough to support different applications.
Customers have a breadth of requirements: ‘These workloads need hot, high-performance infrastructure, while those can run on older, cooler servers’; ‘these applications need more redundancy, and these services require more storage’; ‘this rack needs 10kW of cooling, that rack 25kW’. So, rack by rack and room by room, our customers demand a breadth of capability that encompasses their needs.
However, this need for flexibility stands in the way of data center design and implementation consistency.
Today, because of these diverse and distinctive needs, customers want some level of bespoke solution tailored to them. Whether they’re a colocation company wanting a 100MW site or an enterprise wanting a 25MW site, everyone has unique geographical and design requirements, including power requirements.
Because of this, the industry hasn’t been able to standardize. It’s interesting to us how divergent the industry has become. Design and implementation, whether we’re looking at Tier III compliance on redundancy and maintainability, or just density ranges, all are in flux, all are different from data center provider to data center provider, and all come with varying levels of efficiency.
The data center industry remains one where one size does not fit all. Yet, as in tailoring or automobile manufacturing, custom designs are less efficient than standardized ones. In both of those worlds, very few customers get a custom design. Instead, 90 percent of them buy something off the rack or from the showroom, perhaps having a dress or a suit hemmed, or adding an off-road package to an SUV; but those products are created in a factory, with automation, and shipped to a site.
That’s the most efficient way to deliver a product, and even with modular designs, the data center industry isn’t there yet. Our customers’ demands for variability impact our ability to drive out inefficiencies, whether at the level of site selection, material procurement, construction, or operation.
Of course, one of the inefficiencies we have to drive out is cost. In a world without zero-interest loans, buyers are more cost-conscious than ever. How do we, as an industry, manage to strike a balance between the need for bespoke designs, and the need to come in under budget? How do you cater to 90 percent of the customer base and meet each of their individual requirements, without adding so much complexity or cost that buying becomes impossible?
In a world with rising component and personnel prices, the race is on to sidestep costs. Standardization is the way to do that ‒ but our customers don’t necessarily see it that way.
The first problem is balancing unique needs with demands for standardized efficiency. Another challenge we face is the issue of stranded capacity.
You can probably think of a hundred ways we’re building stranded capacity. Whether that's in battery storage capacity, bulk storage or bulk generation ‒ redundant generators in case the power goes out ‒ or simply un-utilized space, our designs often come with elements that aren’t used, or are almost never used. That’s wasteful, and our customers pay the price.
There’s another element of this problem that’s easily overlooked. A data center not only consumes resources, it also has the potential to create them ‒ and the industry as a whole has significant stranded capacity for resources we own.
For example, data centers create heat, and most of them simply dump it into the air or into a water supply. Similarly, many data centers have backup generators, or solar arrays and battery banks, that sit idle unless the grid fails. Yet most data center designers and operators don’t think about ways to utilize those resources by providing them, or selling them, to other organizations such as municipalities. Instead, we’d rather sit around with unutilized generators or dump millions of gallons of hot water into the environment.
This is understandable. Data center operators are risk averse, they want to futureproof designs, and so on. Working out how to sell resources like power and water is much harder and more complex than not selling them. But to pursue greater efficiencies, we need to face the stranded capacity problem that everybody in the industry recognizes but nobody has worked out how to solve.
These are hard problems, and so far, there’s no consensus in the industry on how to solve them.
What does the industry need to do?
First, we need to cultivate a deeper understanding of how data centers fit into a local, regional, and global ecosystem of resource creation and consumption. A data center doesn’t exist in isolation.
- It’s built on real estate.
- It’s made up of materials.
- It interacts with power grids, fiber infrastructure, water supplies, roads.
- It needs on-site construction and management personnel.
- It’s probably not the only data center owned by the organization.
All of these factors influence efficiency. A data center provider:
- Could choose a brownfield site.
- Could choose to build with renewable resources and energy.
- Could build a denser design that reduces footprint and building material consumption.
- Could provide power to grids during peak load or provide hot water to office buildings.
- Could be built in areas with existing fiber, and roads, and personnel.
- Could be built with less on-site redundancy, instead designing for full data center failover.
Second, we have to take a position.
It’s time for thought leaders to agree on the need to always optimize for efficiency. There’s been a tendency to give customers what they want ‒ “the customer is always right” ‒ but just because customers ask for something doesn’t mean the request is reasonable when evaluated against the need for efficiency; best practices would often lead in another direction.
We also need to think more deeply, and get better at understanding ways to solve problems. We need to demonstrate to customers that we know why they have a requirement, and show them that we can meet the intent of that requirement in a different, more efficient way.
We’ve already observed that basic choices like site selection, redundancy, and resource production can deeply affect how efficient a data center can be. It’s on us to make that point to our customers.
Essentially, we need to have the courage to influence the direction that a customer goes. We can get better at recommending more common, more standardized solutions that still meet our customers’ need for flexibility.
Efficiency matters, but it doesn’t exist in isolation. Thinking broadly and deeply is on us ‒ we need to lead that charge with our customers. There are ways to address needs ‒ agility, flexibility, resilience ‒ without going down the path of costly, inefficient, bespoke solutions that come with stranded capacity and high costs. Efficiency is on us. Let’s make it happen.