There’s no doubt the data center landscape is changing — from the rise of large public cloud service providers to the expansion of colocation operators. And despite consistent growth at the Edge, on-premises data centers aren’t going away. In fact, a recent What's Your Edge global survey showed that nearly half of IT infrastructure remains on-premises.

Against this backdrop, mission-critical applications requiring high-density, high-performance computing (HPC) are proliferating. Take healthcare as just one example. A recent survey of Medi-Cal providers in California showed a 20-fold increase in the number of telemedicine visits in just the first year of the pandemic, from two percent of visits prior to the pandemic to 45 percent.

Additionally, telehealth claims as a percentage of all medical claims increased 11 percent from November to December 2021, amid the Omicron surge.

We’re seeing these high-density use cases across nearly all industries. According to IDC, augmented and virtual reality will increase in value by more than 68 percent by 2025, driven by both businesses and everyday consumers.

The continued proliferation of these HPC applications raises the question: Can your data centers keep up?

What’s old can be new again

It’s not always possible or cost-effective to build new facilities. At the same time, space is often at a premium and you need to make the most of your current floorspace.

Rather than giving up on your existing data center, now is the time to give it new life. An upgrade can help you accommodate the increased processing power and rack densities these applications require.

Retrofit solutions can help extend the lifespan of your existing data centers and make sure they are ready to handle future growth.

According to Gartner, “Although continued investment in an older, more traditional data center may seem contradictory to current trends, if done wisely, it can yield significant benefits to short- and long-term planning.”

When you are ready to embark on a retrofit journey to extend the life of your existing data center, I recommend focusing on these key areas: enhancing delivery, reinventing your infrastructure, and maximizing space.

Enhance IT delivery and reinvent your infrastructure

It’s no secret that the power needed to cool a data center is significant, and higher-density racks can often require more than 1.5 kilowatts (kW) of cooling load for every 1 kW of IT load.
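To put that ratio in perspective, here is a minimal sketch, assuming the 1.5-to-1 cooling ratio above and a few hypothetical rack densities (the rack loads are illustrative, not measurements):

```python
# Illustrative only: estimate the cooling overhead implied by the
# 1.5 kW-per-1 kW ratio cited above. Rack load values are hypothetical.

def cooling_load_kw(it_load_kw: float, cooling_ratio: float = 1.5) -> float:
    """Return the cooling load implied by a given IT load and ratio."""
    return it_load_kw * cooling_ratio

for rack_kw in (10, 20, 40):  # example high-density rack loads in kW
    cooling = cooling_load_kw(rack_kw)
    print(f"{rack_kw} kW IT load -> {cooling:.1f} kW cooling, "
          f"{rack_kw + cooling:.1f} kW total per rack")
```

At 40 kW per rack, the implied cooling overhead alone would exceed the total power budget of many legacy racks, which is one reason the cooling approach is central to any high-density retrofit.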

If you are retrofitting your data center for higher densities in a small footprint, liquid cooling can be a viable option. Liquid cooling leverages the higher thermal transfer properties of water or other fluids to support efficient and cost-effective cooling of high-density racks.

Here’s a simplified explanation of how it works: A cool liquid is circulated to cold-plate heat exchangers embedded in the IT equipment. This provides efficient cooling, since the cooling medium goes directly to the IT equipment rather than cooling the entire space. 

The density enabled by liquid cooling can also eliminate the need for expansions or new construction, or allow new facilities to be built with a smaller footprint. It also enables processing-intensive Edge applications to be supported where physical space is limited.

Liquid cooling is currently available in a variety of configurations, including rear door heat exchangers (RDHx), direct-to-chip cooling, and immersion cooling. All three configurations can increase your data center’s efficiency and reliability, boost sustainability, lower the total cost of ownership, and improve utilization.

Although an RDHx doesn’t bring liquid directly to the server, it utilizes the high thermal transfer properties of liquid to increase the efficiency of racks. In a passive RDHx, a liquid-filled coil is installed in place of the rear door of the rack, and as server fans move heated air through the rack, the coil absorbs the heat before the air enters the data center.

In an active design, fans integrated into the unit pull air through the coils to increase unit capacity. The RDHx allows power that was once used for cooling to be reused to support other building systems.

By contrast, direct-to-chip liquid cooling uses cold plates that sit atop a server’s main heat-generating components to draw off heat through a single-phase or two-phase process. In a single-phase process, cold plates use a cooling fluid looped into the cold plate to absorb heat from server components.

In the two-phase process, a low-pressure dielectric liquid flows into evaporators, and the heat generated by server components boils the fluid. The heat is released from the evaporator as vapor and transferred outside the rack for heat rejection.
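One way to see the difference between the two processes is to compare sensible heat removal (single-phase) with latent heat removal (two-phase). The sketch below is a back-of-the-envelope illustration only; the flow rate and fluid properties are assumed, round-number values, not figures for any specific cold-plate product:

```python
# Back-of-the-envelope comparison of single-phase vs. two-phase heat removal.
# All values below are assumed, round-number illustrations.

flow_kg_s = 0.05              # assumed coolant mass flow rate (kg/s)

# Single-phase: sensible heat, Q = m_dot * c_p * delta_T
cp_water = 4186               # specific heat of water (J/kg-K)
delta_t_k = 10                # assumed coolant temperature rise (K)
q_single_w = flow_kg_s * cp_water * delta_t_k

# Two-phase: latent heat of vaporization, Q = m_dot * h_fg
h_fg_dielectric = 100_000     # assumed latent heat of a dielectric fluid (J/kg)
q_two_phase_w = flow_kg_s * h_fg_dielectric

print(f"Single-phase: {q_single_w / 1000:.1f} kW removed")
print(f"Two-phase:    {q_two_phase_w / 1000:.1f} kW removed")
```

The specific numbers matter less than the mechanism: a phase change moves more heat per kilogram of fluid than a modest temperature rise, which is part of what makes two-phase designs attractive for very dense racks.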

The third option for liquid cooling — and perhaps the most innovative — is immersion cooling. With immersion cooling, servers and other components in the rack are submerged in a thermally conductive dielectric liquid or fluid.

In a single-phase immersion system, heat is transferred to the coolant through direct contact with server components and removed by heat exchangers outside the immersion tank.

In two-phase immersion cooling, the dielectric fluid is engineered to have a specific boiling point that protects IT equipment but enables efficient heat removal. Heat from the servers changes the phase of the fluid, and the rising vapor is condensed back to liquid by coils located at the top of the tank.

Adoption of liquid cooling is only expected to grow. According to the Vertiv Edge survey, six percent of data center managers around the world are currently using liquid cooling in Edge deployments, a figure that rises to nine percent in North America alone. And by 2025, Gartner predicts, “data centers deploying specialty cooling and density techniques will see 20 percent to 40 percent reductions in operating costs.”

Small space? No problem

Liquid cooling isn’t the only way to make better use of your space; HPC applications may also benefit from a micro data center.

Micro data centers have all the same components you would find in a typical data center, but on a much smaller scale, including an uninterruptible power supply (UPS), rack power distribution unit (rPDU), rack cooling unit, and remote monitoring sensors and software. Micro data centers typically support critical loads of no more than 100-150 kW.

The entire system is enclosed within a single standard IT rack, making the micro data center a good fit for an existing network closet or small server room. We also see micro data centers work well at the Edge, particularly in open office spaces, retail stores, and healthcare clinics.
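When scoping a deployment, a quick sanity check is to total your projected equipment loads against that range. The sketch below is purely illustrative; the load values are hypothetical, and the 150 kW ceiling simply mirrors the rule of thumb above, not a product specification:

```python
# Illustrative sizing check: does a projected critical load fit within the
# 100-150 kW range typically supported by a micro data center?

MICRO_DC_MAX_KW = 150  # upper end of the range cited above; a rule of thumb

projected_loads_kw = {  # hypothetical equipment loads
    "servers": 85,
    "network": 12,
    "storage": 20,
}

total_kw = sum(projected_loads_kw.values())
headroom_kw = MICRO_DC_MAX_KW - total_kw

print(f"Projected critical load: {total_kw} kW")
if headroom_kw >= 0:
    print(f"Fits a micro data center with {headroom_kw} kW of headroom")
else:
    print(f"Exceeds the typical micro data center range by {-headroom_kw} kW")
```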

While they may not address every computing challenge, micro data centers can provide an affordable, reliable IT solution when you need critical applications in a small or contained footprint.

Retrofit done your way

If there are increasing demands on your infrastructure to maintain the reliability of critical applications, a retrofit of your existing data center may be in order. You can bring new life to your data center to keep it running productively and cost-effectively for years to come.

Your retrofit could include liquid cooling, a self-contained micro data center, or a combination of the two, but it’s important to remember there’s no one-size-fits-all approach. All environments and use cases are unique, so you should take time to closely examine your organization’s needs and goals before you make this investment.

It’s also appropriate to take your time rather than attempting a complete overhaul all at once. A phased retrofit with the help of an expert or trusted services partner can be a path toward improving your infrastructure in a more manageable way.

No matter how you choose to retrofit, the upgrades to your data center can lead to many organizational benefits in the near- and long-term. To learn more about building future-ready data centers using disruptive technologies, download the Vertiv newsletter with insights from the experts at Gartner.