US space agency NASA held a grand opening on Thursday for a supercomputer it says will run practice simulations of its 2024 mission to land a woman on the Moon.

It built the Modular Supercomputing Facility (MSF) at its Ames Research Center, in the part of Silicon Valley that touches San Francisco Bay, where it could use the temperate air to cool its hot computer processors. The design cut the energy NASA normally uses to cool its supercomputers by about 90 percent, the agency said in press briefings.

Tsengdar Lee, NASA High-End Computing program manager, and Eugene Tu, NASA Ames Research Center director, cut the ribbon at the opening of NASA's Modular Computing Facility – NASA Ames Research Center / Dominic Hart


Journalists at the opening were shown what looked like a wire-fenced parking lot populated with white shipping containers. Each was a module of the supercomputer, joined to the others using modular innovations NASA developed with Hewlett Packard Enterprise (HPE).

The first official MSF module contained a bespoke HPE SGI 8600 computer nicknamed Aitken, after the astronomer Robert Grant Aitken. NASA would use Aitken to simulate moon landings for its 2024 Artemis mission.

The Aitken module was capable of 3.69 petaflops (quadrillions of floating-point operations per second), roughly equivalent to the world's fastest supercomputer of about 10 years ago.
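Those headline figures can be sanity-checked with a little arithmetic. A minimal sketch in Python, using the 46,080-core count reported for Aitken; the per-core throughput is a derived estimate, not an official NASA figure:

```python
# Back-of-envelope check of Aitken's quoted peak: 3.69 petaflops
# spread across 46,080 cores (both figures as reported; the per-core
# number below is our own derived estimate).
peak_flops = 3.69e15          # 3.69 petaflops = 3.69 quadrillion FLOP/s
cores = 46_080

per_core = peak_flops / cores # about 8.0e10 FLOP/s per core
print(f"{per_core / 1e9:.0f} gigaflops per core")  # prints "80 gigaflops per core"
```

That works out to roughly 80 gigaflops per core, a plausible peak for a modern server-class x86 core with wide vector units.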

A NASA spokeswoman said the agency had designed the MSF so that it could add modules whenever it needed to boost processing power, and swap components out of a module whenever it wanted to upgrade the technology. Aitken was the first step.

"We have a big expanse of concrete that can accommodate up to sixteen modules," she said. "The modular technology is very nimble. It allows us to deploy new modules rapidly and to install new hardware. We already have [another] prototype module that has modifications. So the technology has already evolved."

Aitken had 1,150 computer nodes that used 2nd Generation Intel Xeon Scalable processors, with a total of 46,080 processor cores. It had 221 terabytes (trillions of bytes) of memory, tens of thousands of times more than the average personal computer. It used Mellanox InfiniBand networking and Schneider Electric SmartShelter Containers, HPE said in a press release.


The MSF's cooling system used about 90 percent less water and electricity than prior NASA supercomputers, which typically consumed millions of gallons of water and required a cooling tower. The Aitken module passed cool water over its computer nodes in pipes, which fed the heated water into two adiabatic coolers on the roof of the container. An adiabatic system circulates water in a closed loop and sheds its heat to the outside air, whereas conventional liquid cooling carries heat away in a continuous wash of mains water sent down a drain. The MSF employed outside air in its cooling system as well.

Its energy efficiency, measured as power usage effectiveness (PUE), was consequently 1.03, meaning only about 3 percent of its power went to overhead such as cooling. Even the most energy-efficient data centres in the industry, at a PUE of 1.2, carry nearly seven times that overhead.
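The PUE comparison comes down to simple arithmetic. A minimal sketch, using only the two PUE values quoted above (PUE is the ratio of total facility energy to the energy delivered to the IT equipment itself):

```python
# PUE = total facility energy / IT equipment energy, so a PUE of 1.03
# means only ~3% of the facility's power is overhead such as cooling.
# The two PUE values are from the article; the ratio is our own arithmetic.
msf_pue = 1.03
efficient_dc_pue = 1.2   # benchmark quoted for highly efficient data centres

msf_overhead = msf_pue - 1.0          # ~3% overhead
dc_overhead = efficient_dc_pue - 1.0  # ~20% overhead

# A 1.2-PUE facility spends roughly 6.7x as much energy on overhead.
print(f"overhead ratio: {dc_overhead / msf_overhead:.1f}x")
```

In other words, for every unit of energy the computers themselves consume, an MSF module spends about 0.03 extra units on cooling and other overhead, against about 0.2 for even a very efficient conventional data centre.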

NASA built a proof-of-concept prototype MSF module, called Electra, in 2016. It had a peak performance of 8.32 petaflops, more than double that of Aitken, with twice the number of compute nodes, three times more processor cores and nearly three times as much memory. Its energy efficiency was slightly better still, at 1.02 PUE. NASA used it to run test simulations of flight propulsion systems for quadcopters and helicopters.