Between January and April 2024, two earthquakes of magnitude 7.6 and 7.2 struck East Asia, causing billions of dollars of damage and killing hundreds of people.
Despite the widespread destruction in Japan and Taiwan, where the earthquakes hit, one thing remained relatively unscathed in both countries: their chip fabs.
This is because, although the fabs are situated in areas that are prone to level 4 and 5 seismic activity, many of the factories had been built with enough structural integrity to withstand such natural disasters.
Designing resilient data centers
While some design and construction techniques come as standard, Mauro Leuce, global head of design and engineering at Colt Data Center Services, said that different regions tend to take different approaches when it comes to seismic strengthening.
In Japan, where Colt’s data centers are located, base isolation is the preferred technique, due to the intensity of the energy generated by the fault lines the country sits on.
First used by Colt DCS in 2011, base isolation involves placing flexible bearings or pads made from layers of rubber and lead between the building's foundations and the structure. If an earthquake were to hit, the base isolators would absorb most of the impact, reducing the swaying and shaking of the data center.
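In simplified structural dynamics terms, the isolation layer lowers the building's natural frequency well below the dominant frequencies of strong ground shaking, so only a fraction of the motion is transmitted upward. The sketch below illustrates the idea with the standard single-degree-of-freedom transmissibility formula; the frequencies and damping ratios are illustrative assumptions, not Colt's design values.

```python
# Minimal sketch: why a flexible isolation layer reduces the shaking a building
# experiences. All numbers are illustrative assumptions, not design values.
import math

def transmissibility(excitation_hz: float, natural_hz: float, damping_ratio: float) -> float:
    """Fraction of ground motion transmitted to the structure at steady state."""
    r = excitation_hz / natural_hz  # frequency ratio
    num = 1 + (2 * damping_ratio * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * damping_ratio * r) ** 2
    return math.sqrt(num / den)

quake_hz = 2.0  # dominant frequency of strong ground shaking (typically ~1-10 Hz)

# A conventional fixed-base building with a natural frequency near the shaking
# frequency gets amplified; an isolated building tuned well below it does not.
fixed_base = transmissibility(quake_hz, natural_hz=3.0, damping_ratio=0.05)
isolated = transmissibility(quake_hz, natural_hz=0.4, damping_ratio=0.15)

print(f"Fixed base:    {fixed_base:.2f}x the ground motion")
print(f"Base isolated: {isolated:.2f}x the ground motion")
```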
All of Colt’s data centers are now designed with base isolation as standard. Leuce notes that while it is a technique widely used across Japan, in the US it’s a more recent innovation. By comparison, in Europe, where seismic activity is much rarer, the so-called tie-down technique is favored by data center providers.
“From a cost perspective, it's probably easier to isolate the structure so that it will float above the earth while everything else is moving, instead of what is usually done in commercial or residential buildings, where you try to restrain the systems, such as pipework, electrical cables, transformers, generators, anything else,” he says.
“Going around and trying to fix things as strongly as possible against the walls costs more and is much more time-consuming than simply starting with a design that can be isolated from the ground.”
Leuce explains that when Colt DCS designs the layout of a data center, it ensures the most critical parts, such as the data halls, electrical rooms, and other ancillary rooms required for business continuity, are placed on the isolation base. Other elements, such as generators, which are often designed to withstand an earthquake, can then be placed directly on the ground.
He adds that it’s also important to make sure nothing heavy is suspended above your servers, in case it becomes dislodged during an earthquake and ends up crushing them.
A final technique employed by Colt DCS is the use of dampers – hydraulic devices that dissipate the kinetic energy of seismic events and cushion the impact between structures.
Having previously deployed lead dampers at its first data center in Inzai, Japan, Colt has gone a step further at its most recently built facility in Keihanna, Japan, where it uses a combination of naturally laminated rubber, an oil damper, and a friction pendulum system – a type of base isolation that damps movement both vertically and horizontally.
“The reason why we mix the friction pendulum with the oil damper is because with the oil damper, you can actually control the frequency in the harmonics pulsation of the building, depending on the viscosity of the oil, while the friction pendulum does the job of dampening the energy in both directions, so you bring both technologies together,” Leuce explains.
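Leuce's point about the viscosity of the oil can be illustrated with the same textbook model: a more viscous damper raises the damping ratio and flattens the response near the isolation system's resonant frequency. The sketch below is a rough illustration of that relationship under assumed values, not Colt's engineering model.

```python
# Illustrative only: how increasing damping (e.g. a more viscous oil damper)
# flattens the response peak near resonance. Values are assumptions.
import math

def transmissibility(r: float, zeta: float) -> float:
    """Base-excitation transmissibility at frequency ratio r, damping ratio zeta."""
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)

# Scan frequency ratios around resonance and report the peak response for
# increasingly heavily damped settings.
for zeta in (0.05, 0.15, 0.30):
    peak = max(transmissibility(r / 100, zeta) for r in range(50, 201))
    print(f"damping ratio {zeta:.2f}: peak response {peak:.1f}x the ground motion")
```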
As a result of having base isolation in place, when the 9.1 magnitude Tohoku earthquake hit Japan in March 2011, Colt’s entire data center moved by only 10cm.
Colt is far from alone in facing this challenge: a significant number of data centers are situated on the West Coast of America and in East Asia, where much of the land sits on or near fault lines and the earthquake risk is therefore significant.
According to Ibbi Almufti, principal, risk and resilience at engineering consultancy firm Arup, seismic activity is one of the hardest natural phenomena to protect against. As such, he prefers to use terms such as ‘resilience’ or ‘strengthening’ rather than ‘proofing’ when discussing such building techniques.
Some of these techniques are described by chip maker TSMC on its website, which has a page dedicated to what the company calls “pioneering anti-seismic methodologies.”
Following the 1999 7.3 magnitude Chi-Chi earthquake in Taiwan, the company implemented a series of earthquake protection management plans that surpass the legal requirements of the Taiwanese government.
These include adding seismic anchorage onto all equipment and facilities, installing floating piles at new fabs in Tainan Science Park to decrease seismic amplitude by 25 percent, and appointing 180 earthquake protection guards who are fully trained in seismic knowledge and practices.
As a result, after the most recent round of earthquakes, TSMC reported no major damage to equipment at any of its facilities.
California-based Almufti works with a lot of data center providers and often becomes involved in the process right from the get-go, when companies are still thinking about which sites to build on.
He explains that one of the biggest drivers of site selection currently is power availability, but data center providers are often already very clued up about potential hazards when they approach him to consult on projects, listing risks he hadn’t necessarily considered, such as the potential for a train derailment to impact a facility built in the vicinity of railway tracks, or a toxic gas spill.
On the flip side, Almufti says some providers believe that a facility built to meet local building codes will automatically have the level of seismic resilience the site requires. However, he explains that this is not the case, as building codes are often designed only to protect the lives of building occupants, not to keep a facility operational after a disaster.
There are still a few occasions where, to pardon the pun, these considerations slip through the cracks. Retroactive seismic strengthening of buildings is another service Almufti consults on, and right now he’s helping a data center provider that unwittingly constructed its facility on a fault line and is now trying to minimize any potential damage that could befall the structure in the future.
“Most of the time, they’re pretty smart about it,” he says. “You would never knowingly build in those types of zones.”
To further assist less experienced clients, Almufti has also helped to develop the Resilience-based Earthquake Design Initiative (REDi) guidelines, a framework for building resilience that offers Silver, Gold, and Platinum rating tiers.
Created to help owners, architects, and engineers implement “resilience-based design,” according to its website, the REDi Rating System provides “design and planning criteria to enable owners to resume business operations and provide liveable conditions quickly after a disaster,” such as earthquakes, extreme storms, and flooding.
Putting racks through their paces
It’s all well and good making sure you include base isolation systems and state-of-the-art dampers, and that no heavy pipes are suspended precariously above your servers, but how do you know whether that’s enough to protect your data center should an earthquake strike?
When New York was caught off guard by a 4.8 magnitude quake in April, IBM engineer PJ Catalano joked on X, formerly Twitter: “I am happy to report that all 200 mainframes in Poughkeepsie, NY have successfully passed this FREE earthquake test!”
Speaking to DCD after the earthquake, Catalano explains that, to ensure its mainframes survive when disaster does strike, IBM has designed a multi-phased testing approach that all hardware headed for the facility is subjected to before being installed.
“We start with computer simulation so, before we build anything, we have models that we take through simulation to gauge what materials need stiffening, weight distribution,” he says. “From there, we go to a second phase of prototyping so we can test the real materials in the real world.”
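Catalano doesn’t detail the models involved, but a first-pass version of this kind of simulation can be as simple as treating a rack as a mass on a flexible frame and integrating its response to a burst of shaking to see whether peak sway stays within an assumed limit. The sketch below is purely illustrative, with assumed mass, stiffness, and shaking values; it is not IBM’s actual tooling.

```python
# Illustrative first-pass simulation: a rack modeled as a mass on a flexible
# frame, driven by a burst of ground shaking. All values are assumptions.
import math

mass = 800.0        # kg, loaded rack (assumed)
stiffness = 2.0e5   # N/m, lateral frame stiffness (assumed)
damping = 2 * 0.03 * math.sqrt(stiffness * mass)  # ~3% structural damping

dt, duration = 0.001, 10.0
x, v = 0.0, 0.0     # displacement (m) and velocity (m/s) relative to the floor
peak = 0.0

for step in range(int(duration / dt)):
    t = step * dt
    # Five seconds of 2 Hz shaking at 0.3 g, then quiet.
    ground_acc = 0.3 * 9.81 * math.sin(2 * math.pi * 2.0 * t) if t < 5.0 else 0.0
    # Relative equation of motion: m*x'' + c*x' + k*x = -m*a_ground
    acc = (-mass * ground_acc - damping * v - stiffness * x) / mass
    v += acc * dt
    x += v * dt
    peak = max(peak, abs(x))

print(f"Peak rack sway: {peak * 1000:.1f} mm")
# If the sway exceeds the design allowance, that points to where stiffening is needed.
```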
Once those stages have been completed, IBM carries out shake-table and operational vibration tests. Operational vibration tests ensure the rack and its components will continue to function through events more akin to a high-speed train line or a busy highway in close proximity to a data center.
There’s also a shipping test where IBM simulates its racks being in the back of “an 18-wheeler flying down the highway at 60 miles an hour” because, as Catalano notes, “if it can't handle that, then an earthquake is out of the realm.”
Finally comes the earthquake test, where IBM tests the stiffener and the earthquake kit it sells as an additional extra to customers operating in areas with significant seismic activity.
“Anytime we design, build, and ship a brand new generation mainframe, like the z16, we go through this whole suite of testing,” Catalano says. “We generally do it once for each hardware release to make sure that the cage, the frames, all the components that were released, go through this set of testing at least once.”
IBM also seeks certification from independent labs to give it additional credibility and prove to its customers that the system is working as it should.
Climate change is the new resiliency frontier
While seismic activity remains one of the main disasters organizations want to protect their critical infrastructure against, Almufti said that the extreme weather being experienced as a result of climate change is bringing with it a whole host of new resilience challenges.
While UK-based data center providers probably don’t have seismic proofing near the top of their construction to-do lists, when the country was struck by a record-breaking heatwave in the summer of 2022 and temperatures soared to 40°C (104°F), Google, Oracle, and London-based Guy’s and St Thomas’ NHS Foundation Trust all experienced data center outages.
Almufti said flooding is the most intuitive natural disaster to protect against as you just have to raise the foundations or put in retention ponds or storm drainage. He also helps operators predict how future heat waves or extreme temperatures might impact their mechanical systems, allowing that to be factored into any building plans.
Additionally, as most data centers are constructed with precast concrete or tilt-up walls containing very few openings, they’re pretty resilient to most strong winds. However, Almufti explains that unless you build a concrete bunker - something he says is possible but very costly - it is very challenging to protect against another common US weather phenomenon, tornadoes.
“Every [technique] is slightly nuanced and ideally, what you're doing is trying to find synergies between the measures so they're co-beneficial,” he says.
But, to end where we began, Almufti reiterates his earlier point that seismic remains the most difficult hazard to protect against.
“With seismic, you have to really focus on the inside guts of the structure, as well as the non-structural elements and the envelope,” he says. “That's why I'm most excited about it - I love the other stuff too, but seismic really is the most academically rigorous process that you go through.”