The US National Nuclear Security Administration (NNSA) is planning to deploy one of the world’s largest clusters of Open Compute hardware.
The High Performance Computing (HPC) system, which will deliver between seven and nine petaflops, will be shared among three of the most important research sites in the US: Lawrence Livermore, Los Alamos and Sandia National Laboratories.
It will be based on Tundra Extreme Scale (ES) servers by American HPC specialist Penguin Computing.
“These computing clusters will provide needed computing capacity for NNSA’s day-to-day work at the three labs managing the nation’s nuclear deterrent,” explained Doug Wade, head of NNSA’s Advanced Simulation and Computing (ASC) program.
“This tri-lab effort will help reduce costs, increase operational efficiencies, and facilitate collaborations that benefit our nation’s security, support academia, and advance the technology that promotes American economic competitiveness.”
The NNSA is a sub-agency of the US Department of Energy that looks after the safety, security and reliability of the nation’s nuclear deterrent. It aims to avoid further testing of nuclear weapons, replacing it with simulations and experiments at its labs.
The agency has chosen Open Compute hardware to build its tri-laboratory Commodity Technology System (CTS-1), which will replace the ageing Tri-Lab Linux Capacity Cluster 2 (TLCC2). The cost of the project stands at $39 million.
Tundra ES servers were developed to apply the benefits of Open Compute to high-density HPC environments. They are stripped of proprietary technologies and features, freeing up space to support three dual-processor servers per rack unit.
The systems for NNSA will be based on upcoming Intel Xeon E5-2695 v4 chips, expected in the first quarter of 2016.
“This selection further validates how the Tundra ES system, combining the benefits of Open Compute with a high-density compute architecture, can meet the demanding supercomputing needs of an advanced program like CTS-1,” said Tom Coull, CEO of Penguin Computing.
Philip Pokorny, CTO at Penguin, added that the contract shows how the Open Compute design elements can be applied to high-performance computing and deliver similar benefits to those witnessed by large Internet companies.
Penguin Computing will begin delivering its HPC system in the first quarter of 2016, and continue deployment of additional hardware for the next three years.
The Department of Energy has recently contracted IBM to build ‘Summit’ and ‘Sierra’ – two supercomputers that are set to be considerably faster than today’s most powerful machine, Chinese-made Tianhe-2.