Schneider Electric, originally known for its hardware solutions in energy management, has bolstered its portfolio to become a leading provider of industrial software.

In a world where data center customers need a broad set of tools to efficiently manage and analyze large volumes of operational data, the software itself must evolve to meet the demands of AI-driven applications.

In this DCD>Talk, Todd LeBlanc, global solution architect at Schneider Electric, introduces the company's “one software strategy.” This strategy aims to create a unified platform that serves as a central hub for data collection, storage, and analysis, enabled by Schneider’s recent acquisitions of AVEVA, ETAP, and RIB Software.

Transformation of infrastructure

An unavoidable topic in the data center industry is AI and its impact on computing. LeBlanc highlights this shift with the transition from central processing units (CPUs) to graphics processing units (GPUs), driven by the need for higher rack densities to support AI workloads:

"More compute power means we can no longer rely solely on air cooling. The industry is evolving toward liquid cooling solutions, both at the rack level and directly to the chip." He continues:

“It’s clear AI is forcing massive scale, which means our customers cannot rely on just a single vendor. Interoperability of equipment in the data center has become even more important.”

Traditionally, data centers have managed power distribution, cooling, and processing in a gradual, variable manner, adjusting resources to meet standard computing demands.

However, AI applications require massive computational power, often concentrated in short, intense bursts. This shift means that AI-heavy workloads may demand that all servers and supporting equipment be powered on simultaneously.
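To make that contrast concrete, here is a back-of-the-envelope sketch with entirely hypothetical per-rack figures (not from the talk): when every GPU rack in a hall ramps to peak at the same moment, the facility must be provisioned for the synchronized peak rather than the blended average.

```python
# Back-of-the-envelope illustration of synchronized vs. staggered load.
# All figures are hypothetical examples, not Schneider or DCD data.
IDLE_KW, PEAK_KW = 12.0, 44.0   # assumed per-rack draw for a GPU rack
RACKS = 20                      # assumed number of racks in the hall

# Mixed enterprise load: racks peak at different times, so the hall sees
# something closer to the blended average of the two states.
staggered_kw = RACKS * (IDLE_KW + PEAK_KW) / 2

# AI training: every rack ramps to peak together when the job launches.
synchronized_kw = RACKS * PEAK_KW

print(f"staggered load:        {staggered_kw:,.0f} kW")
print(f"synchronized load:     {synchronized_kw:,.0f} kW")
print(f"extra headroom needed: {synchronized_kw - staggered_kw:,.0f} kW")
```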

LeBlanc further explains that the growth of AI requires more hardware in the ‘white space’ – servers, GPUs, and specialized processors – leading to higher equipment density and more heat generation, which intensifies cooling challenges. He adds:

"With more equipment comes more data, which drives the need for advanced software to collect and analyze it. Robust software and power monitoring are now essential to protect these investments. Leading co-location providers have already moved to higher-density server rack designs and introduced direct liquid cooling at the rack level to efficiently manage the increased heat."

Software to serve demand

To combat the challenges associated with AI workloads, namely excess heat generation, higher energy consumption, and greater strain on infrastructure, LeBlanc emphasizes Schneider’s new white space portfolio.

This includes advanced NetShelter SX enclosures, aisle containment, and rack PDU updates, all designed to support the high demands of AI while remaining energy efficient and compliant with regulations.

Additionally, the unified operations center (UOC) provides a ‘single pane of glass’ view, allowing data center operators to monitor and manage all systems centrally.

"We achieve this through real-time anomaly detection, which intelligently identifies irregularities by analyzing multiple variables instead of relying on a single alarm."

"Predictive maintenance powered by AI algorithms also anticipates equipment failures and schedules timely interventions, minimizing downtime and optimizing limited maintenance resources."

Designing AI-ready data centers is complex, and any inefficiencies in layout or system integration can lead to costly operational issues. LeBlanc also notes that Schneider’s design-phase simulators model different systems, allowing teams to optimize data center layouts and configurations for better operational efficiency and reduced future risks.

Scalability, flexibility, and adaptability

To support a system’s ability to grow, adjust, and evolve in response to changing environments, LeBlanc highlights three facets of software that are integral to success in the AI era:

Scalability: "Software plays a key role, being highly templatized yet dynamic, allowing for maximum scalability and reusability across both brownfield and greenfield data centers. While data center teams differ in how they enforce standards, they all share the need to monitor statuses and alarm points."

Flexibility: “Flexibility is key, and it is achievable when the software is hardware- and software-agnostic, capable of integrating with third-party devices and systems. This includes physical devices like PDUs and sensors, as well as software systems such as asset management and ticketing.”

Adaptability: “Adaptability is also essential, as software must evolve with changing environments and requirements, particularly as new technologies like AI shift communication needs, with increasing demand for protocols like MQTT and API integration.”
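As a small illustration of the MQTT-style integration LeBlanc mentions, the sketch below subscribes to device telemetry using the open-source paho-mqtt client (2.x API); the broker address and topic layout are hypothetical placeholders, not a Schneider scheme.

```python
# Minimal MQTT telemetry subscriber, assuming the open-source paho-mqtt
# client (2.x callback API). Broker address and topic layout are
# hypothetical placeholders.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.internal"   # hypothetical on-site broker
TOPIC = "dc/hall1/+/telemetry"       # hypothetical topic per device

def on_message(client, userdata, message):
    """Decode one telemetry payload and hand it to downstream monitoring."""
    try:
        payload = json.loads(message.payload)
    except json.JSONDecodeError:
        return  # skip malformed frames rather than crash the collector
    device = message.topic.split("/")[2]
    print(f"{device}: {payload}")    # replace with a storage/alerting hook

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC, qos=1)
client.loop_forever()                # blocking; use loop_start() inside a larger app
```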

To learn more about what is required of data center software to handle the AI boom, listen to the full DCD>Talk here.