The evolution of artificial intelligence (AI), particularly generative AI products such as ChatGPT, has dominated headlines over the past year. Beyond its potential to disrupt or enhance everyday life – a debate I’ll leave for another time – an often-overlooked impact of AI, and indeed all widescale technology adoption, is that on data centers.

Having experienced the introduction and swift uptake of mobile and cloud computing, data centers are well-versed in taking a proactive approach to new technology.

And with AI still in a relatively immature state, this is a crucial time for data center professionals to be considering how they will respond to the impending AI boom.

Adapting to new workloads

We can divide AI into four main categories: Natural Language Processing (NLP), computer vision, machine learning, and robotics. While robotics is particularly latency sensitive, typically requiring edge computing solutions in very close physical proximity to the process being managed, we expect the first three to really ramp up demand for data center solutions.

And meeting this growing demand will be no mean feat. Not only do we need to think about the physical implications of hosting a huge number of servers to accommodate higher-density workloads, but we also need to consider how to integrate new techniques – such as liquid cooling and immersion cooling – to combat the heat that these servers will generate.

Moreover, the loads won’t be steady. We anticipate enormous surges in demand, whereas historically, data centers have managed reasonably flat, consistent loads.

One of the greatest challenges is that AI is not a homogeneous entity; rather, it is a technology split into two distinct phases: training and inference. I tend to liken this distinction to an athlete – first, they prepare for a race (AI training), before heading to a competition to put their drills to the test (AI inference).

Successful data centers will learn to adapt to both. AI training will require less focus on resilience and redundancy, and more on cost, PUE, and general efficiency. Inference, on the other hand, is very latency sensitive and will require proximity to a metropolitan hub to ensure quick response times for user interfaces and applications.

The regulatory perspective

The difficulty for regulators is not knowing exactly how AI is going to play out. It is still very much in its infancy, and, understandably, regulators want to cover all potential hazards.

The EU’s AI Act is a clear example of this, with regulators categorizing applications into four key risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk. Elsewhere, the NIS2 Directive is broadening the number of sectors expected to comply with its original cybersecurity regulations, now including the digital sphere.

The challenge for many industries, including data centers, will be ensuring compliance with evolving regulations. AI is advancing more rapidly than anything we have seen in recent years, and data centers are sure to feel the knock-on effects as regulators continue to update parameters and define new boundaries of risk.

Addressing key shortages

It is well known that the strategic value of microprocessors has made them subject to government trade restrictions. Combined with the acceleration of diverse AI adoption, and the huge workloads those applications demand, graphics processing units (GPUs) are becoming scarce.

Scaling up production isn’t exactly an easy solution – in fact, recent estimates suggest that building a two-nanometer chip factory in the US or Europe would cost around $40 billion. While we are seeing a concerted effort to spread production across multiple regions, and serious pivoting from businesses like Vultr and Northern Data to create an entirely new ‘AI cloud’ industry, until supply matches demand, the microprocessor shortage is sure to remain a pain point.

Data center shortages are also a cause for concern, but the challenge here lies less in innovation and more in the finite resources of land and power, not to mention the politics that surround them.

Tackling data center shortages requires a two-pronged approach: maximizing power capacity to deliver the low latency levels that AI demands, while doing so in areas where more land is available. Finding remote locations for AI training to take place, so that it doesn’t cannibalize the workloads of inference-heavy metropolitan areas, is one method that will prove extremely valuable.

Reconfiguring the data center for AI

This concept of maximizing what is already available could determine how we reconfigure data centers, in that it puts sustainability at the heart of the strategy.

In France, ‘zero net artificialization’ is an agreement proposed to stop urban sprawl and maintain biodiversity in green spaces. For data centers, this means leveraging the potential of existing buildings and densifying these sites as much as possible. But to do so, some reconfiguration will be required.

We will need to assess how to maximize space in these existing sites, prioritizing efficiency to support high AI workloads. Sustainability is no longer an intangible concept, but a very real issue that should define reconfiguration strategies across the globe.

If we do not start making better decisions that extend the lifespan of data center products, such as switching to liquid and immersion cooling techniques, our efforts to accommodate AI-dense infrastructure will essentially become futile.

Staying ahead of the AI revolution is an ambitious aim for any industry – data centers included. But by embracing advanced cooling techniques, complying with evolving regulations, and championing sustainability at every opportunity, I believe we have the potential to thrive in this new era of technology.

How is Data4 adapting to this new wave of demand from AI?

The first thing we are doing is looking to procure bigger campuses with more power to accommodate these requirements. We have a 180MW site in Frankfurt and two sites in Paris, one with 120MW and another with 250MW. We believe the scale of these sites is ideal for accommodating the needs of AI.

The second key adaptation is building much larger, denser facilities. GPUs have thousands of cores, all running complex calculations simultaneously.

In an AI environment, the servers need to be very near each other, often linked together with InfiniBand connections rather than Ethernet. This creates a very dense (and more efficient) environment within the data center which, in turn, creates challenges for cooling the servers.

Liquid and immersion cooling technologies will be required to manage the heat produced in these environments. The other benefit of larger environments, in the context of AI training workloads, is that they can host a higher number of training sessions, each with a fluctuating consumption profile.

With a higher number of sessions, individual fluctuations average out and overall consumption stabilizes, improving both the efficiency and the environmental footprint of the data center.
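This smoothing effect follows from basic statistics: the relative variability of a sum of independent loads shrinks roughly as one over the square root of the number of loads. A minimal simulation sketch illustrates the idea – the session count, nominal 1MW draw, and ±50 percent fluctuation range below are illustrative assumptions, not figures from any real facility:

```python
import random
import statistics

def relative_fluctuation(num_sessions, samples=2000, seed=42):
    """Simulate the aggregate power draw of independently fluctuating
    training sessions and return its relative standard deviation
    (standard deviation divided by mean)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(samples):
        # Assumed model: each session nominally draws 1 MW,
        # fluctuating uniformly between 0.5 and 1.5 MW.
        totals.append(sum(rng.uniform(0.5, 1.5) for _ in range(num_sessions)))
    return statistics.pstdev(totals) / statistics.mean(totals)

for n in (1, 10, 100):
    # Relative fluctuation drops roughly as 1 / sqrt(n).
    print(f"{n:>3} sessions: {relative_fluctuation(n):.3f}")
```

Under these assumptions, a single session swings by roughly 29 percent around its mean, while an aggregate of 100 sessions swings by only around 3 percent – which is why larger campuses present the grid with a far steadier load.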