The data center is evolving. With the increasing use of GPUs and CPUs for artificial intelligence (AI) applications, demand for cloud processing is driving further AI expansion. As the incoming equipment evolves, the supporting infrastructure inevitably has to change with it. For example, the cabinets that house equipment are growing taller and larger than traditional designs to accommodate the increased volume.

It’s not just the quantity of equipment driving these changes – it’s also about managing the quality and functionality of that equipment. With the intense, high-demand use of GPUs and CPUs in data centers, deploying liquid cooling has become essential, directing coolant as close to the chip as possible to manage the heat it dissipates.

Alan Farrimond, vice president at Wesco Data Center Solutions, emphasizes the importance of engineering liquid cooling solutions to meet the specific needs of each system:

“Each solution is engineered and bespoke to a particular application because the environment changes depending on whether you’re in a hotter or colder environment, and how your data centers are already being cooled today.”

He also points out that these unique solutions go beyond just liquid cooling – they involve changes in cabinet design, infrastructure, cabling, load management, and flooring.

“It’s not just about liquid cooling; the environment is changing, and this is reshaping the design of the data center.”

“It’s a flow of events – AI is changing the chip architecture, which is changing the cabinet, which is changing the cabling infrastructure, which is changing the floor and overall design within the data center.”

Internet 3.0

As data centers evolve to meet the demands of AI-driven workloads, the engineering challenges associated with power consumption and cooling are becoming more complex. Andrew Jimenez, senior director for technical sales at Wesco Data Center Solutions, offers an engineering perspective in relation to Farrimond’s point.

While a typical server CPU draws a maximum of around 200 watts (W), a single data center GPU consumes more than triple that, at around 700 W.

“With eight GPUs per server, it acts as a force multiplier – multiplying the 700 W power consumption by eight gives you roughly 5600 W to 6 kilowatts (kW) of power just from the chips themselves.”

“Given that GPUs typically account for 75-80 percent of the server's total power load, with the remaining 20-25 percent consumed by other components, the total power consumption can approach 10 kilowatts. When four or five of these high-power servers are stacked into a single cabinet, the total power and heat they produce create significant engineering challenges.”
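To put the quoted figures together, the back-of-the-envelope sketch below shows how per-chip draw compounds into cabinet-level load; every value is an illustrative assumption taken from the quotes above, not a measurement of any particular server.

    # Rough estimate built only from the figures quoted above; illustrative, not measured.
    gpu_power_w = 700                  # approximate draw per GPU (W)
    gpus_per_server = 8
    gpu_load_w = gpu_power_w * gpus_per_server            # 5,600 W from the GPUs alone

    # CPUs, memory, NICs, fans, and power conversion add the rest; per the quote,
    # the whole server can approach roughly 10 kW.
    server_total_w = 10_000

    # Four to five such servers stacked into one cabinet.
    cabinet_kw = [n * server_total_w / 1000 for n in (4, 5)]  # 40-50 kW per cabinet

    print(f"GPUs alone: {gpu_load_w / 1000:.1f} kW per server")
    print(f"Cabinet load: {cabinet_kw[0]:.0f}-{cabinet_kw[1]:.0f} kW")

That per-cabinet range is consistent with the 50 to 100 kilowatt planning figure Jimenez cites below.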

Building on Farrimond's point, Jimenez notes the stark contrast between traditional data centers designed in the early internet eras and today’s requirements:

“Wave one, which focused on basic services, and wave two, which centered on cloud migration, relied on air cooling, which was sufficient for those workloads. But in today's AI-driven Internet 3.0 environment, power demands are dramatically higher, requiring plans for 50 to 100 kilowatts per cabinet.”

Reinforcing the idea that such immense power demands necessitate non-traditional cooling methods, Jimenez emphasizes that air cooling is no longer viable. Instead, methods such as direct-to-chip cooling, rear-door heat exchangers, and full immersion cooling are now essential, and deploying them has become one of the central engineering challenges in modern data centers.

Not the ‘what’ but the ‘how’

Deciding to upgrade your data center’s cooling system to something more advanced is one thing, but it’s equally important to consider the actual deployment challenges involved.

Although full liquid cooling at scale is still relatively new in the industry, with few companies having successfully implemented it, Jimenez believes Wesco is leading the way in this transformation:

“We’re adapting our business model and partnering with the right firms to install these systems – they are complex and require expertise in mechanical, electrical, and plumbing engineering."

"Contracting firms may be well-versed in cooling systems for buildings, but they often lack the experience needed to deploy these systems within a data center white space. So, the deployment challenge is just as significant as the engineering challenge of removing heat.”

Farrimond builds on this idea, emphasizing that due to the complexity of the solutions, multiple manufacturers may be required even within the same design, making partnerships critical:

“The companies that will succeed are those that understand and can deploy end-to-end solutions. That’s where larger organizations are investing – building expertise in design and forming strong partnerships with infrastructure, cabling, cabinet, power, and liquid cooling providers.”

He adds, “Not only does this need to be done end-to-end, but at scale. With data centers being built globally, organizations need worldwide capabilities to meet demand.”

For example, Farrimond highlights that Wesco’s global engineering team operates seamlessly across regions to deploy liquid cooling, whether in Portugal, Malaysia, or Brazil. With such an integrated operating model, Wesco aims to provide consistent service for customers as a unified organization, whether they’re buying in Italy, Japan, or elsewhere.

Optimizing data center productivity

One of the key challenges in improving data center efficiency to drive productivity and scale is enabling systems to intercommunicate for optimal performance and preventive maintenance. This spans everything from electrical systems and HVAC to temperature control, security, AV, and more, across the entire data center. Customers and owners find particular value in this because it allows them to meet and exceed established SLAs.

Enter entroCIM.

To address these needs, Wesco offers entroCIM (central intelligence manager), a smart facility platform that aggregates data from IT and OT systems, subsystems, sensors, and third-party applications. By analyzing all monitoring systems within the data center, the tool enables operators to improve productivity and prevent unexpected failures by scheduling engineering tasks before issues arise. According to Farrimond, entroCIM tracks smart metering, access control, building management, intelligent lighting, AV, electrical switchgear, industrial control, smart power, video surveillance, and more.

Sustainability is the number one use case

With the advent of liquid cooling systems integrated into data centers, it is appropriate that sustainability metrics reflect these advancements. While power usage effectiveness (PUE) was once the primary indicator of sustainable operations, other factors such as water usage should now be considered as part of a holistic sustainability approach. Jimenez believes entroCIM to be a powerful tool for monitoring a data center’s actual performance against its set KPIs, providing a central point of aggregation for this information.
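For reference, PUE compares a facility’s total energy draw with the energy that actually reaches IT equipment, while water usage effectiveness (WUE) applies the same idea to water consumption. The minimal sketch below uses made-up example figures purely to show how the two ratios are calculated.

    # Minimal sketch of two common sustainability metrics; the inputs are
    # made-up examples, not measurements from any facility.

    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        """Power usage effectiveness: total facility energy / IT energy (ideal = 1.0)."""
        return total_facility_kwh / it_equipment_kwh

    def wue(site_water_liters: float, it_equipment_kwh: float) -> float:
        """Water usage effectiveness: liters of water consumed per kWh of IT energy."""
        return site_water_liters / it_equipment_kwh

    # Example: 1.3 GWh drawn by the site, 1.0 GWh of it reaching IT equipment,
    # and 1.8 million liters of water consumed over the same period.
    print(f"PUE: {pue(1_300_000, 1_000_000):.2f}")        # 1.30
    print(f"WUE: {wue(1_800_000, 1_000_000):.2f} L/kWh")  # 1.80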

He sees software as key to bridging this gap, particularly through automation:

“This allows you to pinpoint inefficiencies, which can be addressed either through software adjustments or by augmenting staff to fill those gaps. It's a powerful tool for uncovering issues you may not have even realized existed, offering valuable insights into how your operations are performing.”

Connected to and integrated with systems throughout the data center or building, entroCIM serves as the foundation for data collection. It compiles sensor readings, orchestrates across multiple platforms, and, using AI and machine learning, makes autonomous decisions to optimize data center operations.
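As a purely hypothetical illustration of that aggregate-then-act pattern (this is not entroCIM code and says nothing about its actual implementation), a simple rule over combined temperature and cooling readings might look like this:

    # Hypothetical illustration of acting on aggregated sensor data; NOT entroCIM code.
    from statistics import mean

    # Imagined readings gathered from different subsystems (BMS, power, cooling).
    readings = {
        "cabinet_12/inlet_temp_c": [24.1, 24.3, 27.9, 28.4],
        "cabinet_12/coolant_flow_lpm": [30.2, 30.1, 22.5, 21.8],
    }

    # Simple rule: flag a cabinet whose inlet temperature is rising while coolant
    # flow is falling, so maintenance can be scheduled before a fault becomes an outage.
    temp = readings["cabinet_12/inlet_temp_c"]
    flow = readings["cabinet_12/coolant_flow_lpm"]
    if mean(temp[-2:]) > mean(temp[:2]) and mean(flow[-2:]) < mean(flow[:2]):
        print("cabinet_12: rising inlet temperature with falling coolant flow - schedule inspection")

The value in a real platform comes from having readings from many subsystems in one place, so rules and models can correlate them rather than reacting to each sensor in isolation.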

The physical shape of data centers is shifting, with denser white space and increased gray space needed to maintain equipment. As these changes unfold, managing the turnover and recycling of equipment becomes an essential component of a full-circle approach to data center evolution, ensuring sustainability and long-term efficiency.

Skills: Shortage turned into demand

Having introduced entroCIM and its primary uses in optimizing data center productivity and sustainability, Farrimond takes this one step further by explaining how this technology influences the planning and pre-provisioning of essential skills:

“By bringing all that data together, you can effectively manage the skills required across different areas of the data center. This enables you to pre-plan and address skill shortages, especially when demand is high. From a productivity standpoint, the data allows you to ensure the right skills are ready wherever and whenever they’re needed, globally.”

To contextualize the issue, Jimenez highlights that “60 percent of data centers today are tasked with trying to fill a headcount for their data center operations. There's just not enough skilled labor to fill the huge growth and demand for data center operations and data center construction.”

It’s clear that using entroCIM to identify operational gaps is a key capability. Jimenez makes a critical point: it’s not just about acquiring skills and labor, but also about retention. How can we effectively operate in today’s environment, with the growing energy footprint of data centers, while also addressing the challenges of skill shortages?

Is AI the be-all and end-all?

Here the conversation shifts to the philosophical question of using AI to address skills shortages. On one hand, AI and machine learning serve a need at a critical time of workforce shortages. On the other, there's a concern that AI-driven software systems might eventually reduce the need for human involvement, particularly in productivity tasks. Jimenez shares his perspective:

“I believe there will be some level of coexistence – it’s just a matter of what that balance looks like. AI can certainly handle repetitive tasks, but for physical tasks, like making a patch in a data center, I don’t see it replacing humans anytime soon.”

Farrimond elaborates:

“There will always be physical tasks in a data center. While software can analyze data and predict when something needs attention – whether it’s fixing an issue or preventing one – it can’t perform the physical work itself. A data center is like a house that stores data; software can assist with certain tasks, but the manual, hands-on jobs still require people to get them done.”

Ultimately, it seems Wesco is well-positioned to manage the physical evolution of data centers, with assistance from software that continuously adapts to meet emerging demands. Farrimond concludes:

“It’s an end-to-end solution with a roadmap that evolves as functionality grows. The future of data centers will change, and it's crucial to anticipate those shifts.”

Learn more about Wesco's entroCIM platform here.