The Leibniz Supercomputing Centre (LRZ) in Garching, near Munich, Germany, has officially opened the SuperMUC-NG supercomputer, Europe's fastest high performance computing (HPC) system with a theoretical peak performance of 26.7 petaflops.

The launch comes just a week after the JUWELS supercomputer in Jülich was inaugurated - although that was a symbolic ceremony for a system that has been in operation since earlier in the year.

Hot stuff


LRZ is now home to three SuperMUCs - Phase I, Phase II, and NG Phase I - but the first two systems are expected to be retired in 2019.

SuperMUC-NG features 6,400 Lenovo ThinkSystem SD650 direct-water-cooled computing nodes, with more than 300,000 Intel Xeon Skylake cores, 700 terabytes of main memory, and 70 petabytes of disk storage.
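As a rough sanity check, the headline figure lines up with a back-of-the-envelope peak calculation: a Skylake-SP core with two AVX-512 FMA units can retire 32 double-precision floating-point operations per cycle. The core count and clock in the sketch below are illustrative assumptions for that arithmetic, not published LRZ figures.

```python
# Illustrative theoretical-peak estimate for a Skylake-SP system.
# flops_per_cycle: 2 AVX-512 FMA units x 8 doubles x 2 ops (multiply + add) = 32.
cores = 311_000          # assumed core count ("more than 300,000")
clock_hz = 2.7e9         # assumed AVX-512 clock, not an official LRZ number
flops_per_cycle = 32     # double-precision FLOPs per core per cycle

peak_flops = cores * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e15:.1f} petaflops")  # ~26.9 petaflops
```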

The supercomputer, like its predecessor, uses hot water to cool its systems. “When we first came up with hot water cooling they said it will never work. They said you’re going to flood the data center. Your transistors will be less efficient, your failure rate will be at least twice as high… We never flooded the data center," Dr Bruno Michel, head of IBM Zürich Research Laboratory’s Advanced Thermal Packaging Group, told DCD in a profile of SuperMUC and hot water cooling earlier this year.

The SuperMUC contract, and most of the related technology, moved to Lenovo in 2015, after the company acquired IBM’s System x division for $2.3 billion.

Martin Hiegl, business unit director for HPC and AI at Lenovo, told DCD that the latest version of SuperMUC uses a new hot water cooling system: “The first SuperMUC is based on a completely different node server design. We decided against using something that’s completely unique and niche; our server also supports air cooling, and we’re designing it from the start so that it can support a water loop - we are now designing systems for 2020, and they are planned to be optimal for both air and water.”

The heat-dissipating advantages of the cooling system have also meant that "at LRZ, the CPU will support 240W - it will be the only Intel CPU on the market today running at 240W,” Hiegl said.

This month's launch was attended by prominent representatives of the State of Bavaria and the Bavarian Academy of Sciences and Humanities (BAdW). "With SuperMUC-NG, we will continue to provide state-of-the-art HPC technology and compute power to foster research and science in Bavaria, Germany, and Europe," Professor Dr. Dieter Kranzlmüller, Director of LRZ, said.

Running the JUWELS

SuperMUC-NG's launch ends the JUWELS system's brief stint as Germany's most powerful supercomputer.

Officially opened earlier this month, but in operation for a little longer, the 'Jülich Wizard for European Leadership Science' system has a theoretical peak performance of 12 petaflops. That figure is expected to grow next year with a "booster" upgrade designed for massively parallel algorithms that run more efficiently on a manycore platform.

JUWELS, which replaced JUQUEEN, is an Atos Bull Sequana X1000 system, with 2,575 nodes featuring 24-core Intel Xeon Skylake processors. But it "is not an off-the-shelf solution,” Jülich Supercomputing Centre director Professor Thomas Lippert said.

"As one of the largest German research centres, we are in a position to work together with our partners Atos from France and ParTec in Germany to develop the next generation of supercomputers ourselves. For us, modular supercomputing is the key to a forward-looking, affordable, and energy-efficient technology, which will facilitate the realization of forthcoming exascale systems.”

Lippert's flexible design, sometimes known as smart exascale, has been funded by the EU research project Dynamical Exascale Entry Platform (DEEP). The concept was first trialled at JSC in the JURECA supercomputer, another modular machine.

“The modular concept comprises a supercomputer made of several specialized components that can be combined dynamically and flexibly depending on requirements using the ParaStation software of Munich HPC enterprise ParTec,” Dr Dorian Krause, the department head responsible for the deployment and operation of JUWELS at JSC, said.
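To make the idea concrete, the toy sketch below is our own illustration of the modular principle, not ParaStation code: the module names, workflow phases, and selection rule are all invented for the example. Each phase of a hypothetical simulation is routed to whichever module - a general-purpose cluster or a manycore booster - best matches its compute pattern.

```python
# Toy illustration of modular supercomputing: route each phase of a workflow
# to the module best suited to it. All names here are hypothetical.
MODULES = {
    "cluster": {"strength": "latency-sensitive, irregular code paths"},
    "booster": {"strength": "massively parallel, regular compute kernels"},
}

WORKFLOW = [
    {"phase": "mesh setup",       "pattern": "irregular"},
    {"phase": "time-step kernel", "pattern": "massively_parallel"},
    {"phase": "in-situ analysis", "pattern": "irregular"},
]

def assign_module(phase):
    """Pick a module for a phase based on its dominant compute pattern."""
    return "booster" if phase["pattern"] == "massively_parallel" else "cluster"

for phase in WORKFLOW:
    module = assign_module(phase)
    print(f"{phase['phase']:>16} -> {module} ({MODULES[module]['strength']})")
```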

Among the workloads expected to run on JUWELS are those from the EU's ambitious Human Brain Project, a subject that will be the focus of an extensive feature in the next issue of the DCD Magazine.