Remember “Dude, Where’s My High-Density Rack?”
It was June of 2023, and across the board the industry had grandiose visions, perhaps a little too much hope, and a whole lot of uncertainty surrounding the deployment of high-density racks.
So much so that Uptime Institute reported that, despite the AI boom of early 2023, barely anyone had deployed them at all. Even though the industry settled on GPUs as the go-to hardware for computational power – Nvidia boosted its shipments by over a million units that year – the technology didn’t make as big a wave as people envisioned.
Sure, we went from discussing high-density racks to figuring out the most efficient ways to build and deploy them, but did we start to roll them out?
Unfortunately, no. The reality of the rollout isn’t what most industry pundits want to hear: the racks are physically present, but they’re either underutilized, over-allocated, or, in some cases, sitting empty altogether.
So, if this high-density revolution is here – why aren’t we ready to embrace it fully?
Where’s the Issue?
1. Traditional mindsets and market hesitancy
The hurdle we always seem to face never went away: industry inertia and old-school thinking. We won’t pile on data center operators for having a method and sticking to it; their risk aversion and standard practices are not completely unfounded.
Splitting a dense load across additional cabinets usually works, so operators do the same with these high-density racks – but all it does is create unnecessary sprawl, inflate costs, and overpopulate the data hall.
Without standardization – whether it’s flow rates, temperature thresholds, or coil designs – it feels like designing an airplane mid-flight at 40,000 feet. Without those standards, how can operators feel comfortable deploying racks appropriately?
New colocation players and other partners have come onto the scene with the infrastructure to support high density – so why don’t people make the move to these facilities?
The trust isn’t there. Legacy players view these as flashy and inexperienced. New money vs. old money, if you will.
2. Timeline confusion
Another roadblock is the confusion over deployment timelines.
Should adoption be phased – testing in pockets and rolling out higher density gradually?
Is it a race against the clock where we have to look at everything holistically – from the building to the cooling to the infrastructure to the racks?
No one seems to agree – even the experts. So instead, our risk-averse industry sticks to experimenting in silos, hesitant to make bold, comprehensive moves.
3. Liquid-cooling aversion
Yes, the stigma around liquid cooling persists.
Water and hardware have never mixed in the minds of operators, and teams want guarantees that nothing will go wrong – an understandable concern given that cooling and power failures account for 71 percent of all outages, with hundreds of thousands or even millions of dollars of capital on the line. Of course they don’t want to take risks.
But nothing is ever 100 percent, right? That 71 percent of outages already stems from power and cooling failures today – when most facilities aren’t using liquid cooling at all.
Negative pressure systems have significantly reduced, if not eliminated, the risks involved with liquid cooling. Stigmas are hard to break, though, even as PUE remains flat and we know traditional cooling cannot support high density without backsliding.
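For readers less familiar with the metric, PUE (power usage effectiveness) is simply the ratio of total facility power to the power reaching IT equipment, so a flat PUE means cooling overhead isn’t improving. A minimal sketch – the numbers below are illustrative, not measurements from any real facility:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT load.

    A PUE of 1.0 would mean every watt delivered to the facility
    reaches IT equipment; everything above 1.0 is overhead such as
    cooling and power distribution.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical hall drawing 1,500 kW overall, 1,000 kW of it IT load:
print(pue(1500.0, 1000.0))  # 1.5
```

Denser racks raise the IT denominator per square foot, which is why cooling that can’t keep up pushes this ratio the wrong way.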
4. Infrastructure challenges
Operators still need to find ways to retrofit existing facilities effectively. It is a massive undertaking, though, and hardly anyone can commit to supporting the loads, cooling, and power these racks require across an entire data center – not to mention balancing low and high density in the same data hall.
Plus, most companies are not Amazon or IBM. The funds needed to start from scratch and build a brand-new, high-density facility simply do not exist, even if that’s the smart business move in the long run.
With all that in mind, what’s to come?
Even with the challenges the industry faces, we are rocketing towards high-density deployment as a necessity.
AI demands continue to grow and become incorporated into more applications, products, services, and internal company operations. Hardware supply is catching up from prior (and current) supply chain issues. The natural resources and man-made infrastructure available to support data centers without high density are dwindling.
Smaller players have huge opportunities to gain a foothold in this market. With almost no one yet deploying high density or operating high-density colos the “right” way, whoever does it first will get to say they have the most experience.
Nautilus, for example, offers the EcoCore COOL CDU. We have fast deployment, innovative liquid-cooling tech with negative pressure environments for common user connections, and the cooling flexibility people crave in a risk-averse industry where we lack an existing standard.
Bigger players are still working on rolling out effective high-density facilities for themselves and for others. Those plans are not launching until late 2025 at best, or into 2026.
Regardless, providers big and small recognize the revenue potential here – if everyone is too afraid to do it on-prem or in their owned facilities, they can swoop in and say “We’ll do it for you.” We’re going to see:
- A boom of AI as a service very similar to the cloud adoption curve; and
- A reduction of on-premises AI deployments
Are we fully ready now? Not quite – but the trajectory is much clearer because we know how to do it right.
It’s going to be a slow rise before global adoption, and no one has a crystal ball, but we at least have a map.