The growing demand to process unprecedented volumes of information is forcing organizations across all industries to re-evaluate their computing capabilities and the infrastructure needed to support them. High Performance Computing (HPC) is just one example of this.

Historically, the HPC market has been the preserve of compute-hungry customers such as researchers, scientists and engineers, but the growth of data-hungry technologies including cloud, IoT and big data means HPC is now being embraced across far wider communities; a fact not lost on the data center industry.

INL's Falcon supercomputer – INL

A multi-billion dollar market

According to recent research, the HPC market is expected to reach $33bn in value by 2022, growing at a compound annual growth rate (CAGR) of five percent from 2016. Understandably, the data center community wants to be part of this conversation, but what considerations will a data center operator need to account for to become HPC-ready?
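As a rough sanity check on that projection, the $33bn figure and five percent CAGR together imply a 2016 base of roughly $24.6bn. A minimal sketch of the arithmetic (the target value and growth rate come from the figures above; the compounding convention is an assumption):

```python
# Back-of-envelope check of the market projection cited above.
# Assumes the 5% CAGR compounds annually from 2016 to 2022 (6 periods).
target_2022 = 33e9   # projected market value in USD
cagr = 0.05          # compound annual growth rate
years = 2022 - 2016  # compounding periods

implied_2016_base = target_2022 / (1 + cagr) ** years
print(f"Implied 2016 market size: ${implied_2016_base / 1e9:.1f}bn")
# -> Implied 2016 market size: $24.6bn
```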

Firstly, the biggest consideration from a capex perspective is the space needed to house the technology. Data centers are often confined to cities, where square footage comes at a premium. Because of this, retrofitting the necessary next-gen technology into existing facilities is not always possible; a more viable option is often to open a new site with a suitable infrastructure in place to meet the needs of HPC and scale with customer demand.

Obviously, this again represents a significant expense for an organization and, before undertaking such a commitment, it’s important to understand where demand for HPC services is likely to come from.

In addition to space considerations, other facility upgrades are required to meet the demand for HPC. One such example is cooling.

Data center providers are going to great lengths to minimize their cooling costs, with some providers siting their data halls in cold-weather climates. Facilities are advised to run at temperatures anywhere between 20-30°C to achieve an optimal environment for the servers. Once high performance computers are installed, however, power densities are significantly higher, so heat management is of vital importance.

Some HPC providers have tried to ensure consistent cooling by deploying liquid cooling, larger fans or conductive cooling methods. Typically, however, heat is produced faster than coolants or fans can dissipate it. It is important for facilities to be built with cooling systems that far exceed current maximum requirements, especially if there is the possibility that HPC systems will be installed in the future.
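To see why, consider a simple heat balance: the airflow needed to carry away a rack's heat load scales linearly with its power draw. The sketch below uses illustrative, assumed figures (a 30 kW HPC rack and a 10°C air temperature rise); none of these numbers come from the article.

```python
# Minimal sketch: airflow needed to remove a rack's heat load.
# All figures here are assumptions for illustration, not vendor specs.
AIR_DENSITY = 1.2         # kg/m^3, air at roughly room temperature
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def required_airflow_m3s(heat_load_w: float, delta_t_c: float) -> float:
    """Volumetric airflow needed so the air carries away heat_load_w
    with a temperature rise of delta_t_c across the rack."""
    return heat_load_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_c)

flow = required_airflow_m3s(heat_load_w=30_000, delta_t_c=10)
print(f"{flow:.2f} m^3/s (~{flow * 2119:.0f} CFM)")
# -> 2.49 m^3/s (~5271 CFM), versus well under 1 m^3/s for a typical
#    5 kW enterprise rack -- which is why air cooling alone struggles.
```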

For those that already have systems in place and wish to install HPC technology, significant capex spending will be needed to upgrade data halls to meet these thermal management requirements.

Directly tied into the first two requirements for HPC, space and cooling, adequate power provision is essential. The power infrastructure must be able to cope with the demands of both the HPC and the mid-range servers, depending on the data hall's configuration. Those looking to install HPC need to factor in the provision of high-density power, along with its implications for cooling requirements and the resulting environmental impact.
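As a rough illustration of how rack density drives provisioning (a minimal sketch with hypothetical figures; the rack counts, densities and PUE below are assumptions, not figures from the article):

```python
# Minimal sketch of how rack density drives total facility power.
# Assumes hypothetical figures: 20 racks, 30 kW/rack HPC vs 5 kW/rack
# mid-range, and a PUE of 1.5 (PUE = total facility power / IT power).
def facility_power_kw(racks: int, kw_per_rack: float, pue: float) -> float:
    it_load = racks * kw_per_rack  # power drawn by the IT equipment
    return it_load * pue           # adds cooling, lighting, losses

for label, density in [("mid-range", 5), ("HPC", 30)]:
    total = facility_power_kw(racks=20, kw_per_rack=density, pue=1.5)
    print(f"{label:>9}: {total:,.0f} kW total facility power")
# -> mid-range: 150 kW, HPC: 900 kW -- a six-fold jump in provisioned
#    power and, since nearly all IT power ends up as heat, in cooling load.
```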

New data center designs need to take into account the ramifications HPC can have on cooling and power requirements, even if no HPC systems are currently installed on site. Those retrofitting existing data centers to accommodate HPC need to be aware that, without a large capex spend, their site is unlikely to be optimal for running high performance computers.

For those looking for data centers with HPC capability, it is important to be aware of the space and growth potential, the available cooling capacity, and the power provision needed to deliver such high processing speeds.

Greg McCulloch is the CEO of Aegis Data