This summer, in the UK, I experienced a 40°C (104°F) afternoon on which my laptop struggled and I pretty much shut down. Others have had it far worse.

In July, London saw Google and Oracle data centers fail due to the heat. Earlier reports said that data centers in the UK were augmenting their cooling systems with hosepipes, and a London hospital lost its IT services. Over the Channel, France hit a new record temperature of 45.9°C.

Since then, attention has moved to China, where heatwaves closed factories. As a surge of consumers turned on their air conditioning, the authorities ordered factories to shut down to keep the power grid running. It was the hottest weather in sixty years - and a dip in semiconductor supplies is expected.

Everywhere has suffered

The next news focus was the US, where a truly record-shattering heatwave has only just come to an end in California. Sacramento recorded an all-time high of 116°F (46.7°C), the state declared an emergency, and blackouts were only narrowly avoided.

Earlier, in July, Texas had a heat surge during which cryptominers were obliged to shut down to spare the state's creaking grid. Problems with Texas power have cropped up during each of the natural disasters that have hit the state, including the February 2021 winter storm.

It's worth noting just how wide-ranging this is. January saw the Southern Hemisphere's hottest recorded temperature, 50.7°C, in Australia, and the overall picture is one of continuing extreme weather, including devastating floods in Pakistan and drought in the US.

Our main concern in all this should be people, from those made homeless by floods (hundreds of thousands of them in Pakistan) to individual delivery drivers who have sometimes struggled to stay alive.

But infrastructure is part of the fabric of our lives - and that fabric has been fraying in recent months, with data centers striving to help their neighbors while keeping themselves operational.

Verizon in California did its bit, switching to backup power to leave grid power for others.

Twitter suffered a data center outage brought on by the heatwave. With its other facilities still running, the service stayed up, but the whole platform could have gone down had another data center failed.

In all this, data centers mostly performed according to plan. Cooling systems are designed with a tolerance, and backup power is on hand.

However, the London Google failure showed that multiple backup options aren't always enough. And the sheer volume of incidents this year points to an underlying change in the conditions these data centers have to endure.

In terms of cooling technologies, regions that could previously rely on outside air cooling all year round should now be prepared for periods when that is not enough. In the UK, for instance, some data centers have been built with no mechanical cooling at all. This year, those operators must be re-evaluating that decision.

Balancing reliability with citizenship

There's a knock-on effect here. If cooling systems run more often, data centers will consume more energy than they planned for - a serious issue at a time when energy supplies are in question.

Stressed data centers may also consume more water - another awkwardly timed development. The Netherlands is mostly at sea level and not normally short of water, but this year's drought there caused some to grumble at Microsoft's water consumption.

As always, data centers will come under scrutiny, and may face criticism from multiple angles.

If you've struggled to keep your facility operational, delivering services to the people around you, it will sting to face anger from those same people if they think you did it at their expense.