Cirrascale Cloud Services has added Nvidia HGX H200 servers to its AI Innovation Cloud.
The HGX H200 server platform is available as integrated baseboards in eight-GPU Nvidia H200 Tensor Core configurations.
Generally available as of October 3 on the AI Innovation Cloud, the eight-way HGX H200 provides up to 32 petaflops of FP8 deep learning compute and more than 1.1TB of aggregate HBM3e memory.
The Cirrascale instances offer network speeds of up to 3,200Gbps via Nvidia Quantum-2 InfiniBand.
The Nvidia H200 GPUs are the first to feature 141GB of HBM3e memory, with a memory bandwidth of 4.8TB/s, nearly double the capacity of the H100 and 1.4 times its memory bandwidth.
“Cirrascale remains at the forefront of delivering cutting-edge generative AI and HPC cloud solutions,” said Mike LaPan, vice president of marketing, Cirrascale Cloud Services. “With the integration of the Nvidia HGX H200 server platform into our AI Innovation Cloud, we’re empowering our customers with advanced processing capabilities, allowing them to accelerate AI innovation and deploy models with unprecedented speed and efficiency.”
“By deploying the Nvidia HGX H200 accelerated computing platform, Cirrascale can provide its customers with the technology needed to develop cutting-edge generative AI, natural language processing, and HPC model applications,” added Shar Narasimhan, director of data center GPUs and AI at Nvidia. “Our collaboration with Cirrascale will help propel AI and HPC exploration forward to drive a new wave of industry breakthroughs.”
Cirrascale was notably one of the first companies to gain access to Nvidia's H100 GPUs. The company was previously known as Verari Technologies before being renamed Cirrascale in 2010. The cloud services provider is a known customer of Digital Realty data centers.