Silicon vendor Nvidia is set to release a data center platform built specifically to accelerate machine learning workloads. Initially, it will consist of software tools collected under the Nvidia Hyperscale Suite and two graphics accelerators.

The first is the Tesla M40, a high-performance GPU for designing and ‘training’ machine learning models. The second, the Tesla M4, is intended for large-scale deployments in the data center, to help deliver machine learning-powered services to end users.

The announcement by Nvidia follows the news that Google has donated its TensorFlow machine learning framework to the open source community - and TensorFlow can use GPUs, not just CPUs, to carry out its calculations.

Nvidia Tesla M40 – Nvidia

Artificial intelligence

Machine learning is a relatively new concept. It describes algorithms that process huge amounts of data to learn patterns, often building ‘neural networks’, in order to carry out complex tasks such as identifying a person in a picture or interpreting voice commands.

For example, machine learning tools developed by IBM for the Watson ‘cognitive computing’ platform can establish someone’s personality traits after reading just 3,500 words written by the subject, or ‘extract’ their interests, activities and hobbies from publicly available photos and videos.

GPUs can carry out certain highly parallel calculations much faster than Intel’s most powerful CPUs, and have been tipped as the perfect hardware companion for machine learning software. Today, most servers do not feature a dedicated GPU, with system designers opting for basic integrated graphics instead.
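The kind of highly parallel work GPUs excel at can be illustrated, for simplicity on the CPU, with a vectorized sketch in Python and NumPy: a neural network layer reduces to the same multiply-accumulate applied independently across millions of elements, which is exactly the pattern a GPU spreads over thousands of cores. The shapes and values below are illustrative, not from any real model.

```python
import numpy as np

# A dense neural-network layer is bulk linear algebra: each output
# element is an independent multiply-accumulate, so all of them can
# be computed in parallel - the workload GPUs are built for.
rng = np.random.default_rng(0)
inputs = rng.standard_normal((64, 1024)).astype(np.float32)   # batch of 64 examples
weights = rng.standard_normal((1024, 512)).astype(np.float32)  # layer weights

# One layer: 64 * 1024 * 512 multiply-adds, all independent of each other.
outputs = np.maximum(inputs @ weights, 0.0)  # matrix multiply + ReLU activation

print(outputs.shape)  # (64, 512)
```

Libraries such as cuDNN exist to run precisely these dense operations on the GPU instead of the CPU.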

Nvidia’s machine learning platform aims to increase the importance of graphics processing in the data center. Its Hyperscale Suite includes the cuDNN deep neural network library and GPU-accelerated FFmpeg multimedia codecs, as well as the Nvidia GPU REST Engine and Image Compute Engine.

On the hardware side, we can expect two new GPUs. The Tesla M40 is aimed at developers and researchers, offering high throughput and the ability to scale across clusters of multiple devices. It features 3,072 CUDA cores and 12GB of GDDR5 memory, for up to 7 teraflops of single-precision performance. Nvidia claims that on machine learning workloads, the M40 can deliver eight times the compute of a traditional CPU.
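The 7-teraflop figure can be sanity-checked with back-of-the-envelope arithmetic. Assuming Nvidia’s published boost clock of roughly 1.11GHz and two single-precision operations per CUDA core per cycle (one fused multiply-add), the peak works out close to the quoted number:

```python
# Rough check of the quoted ~7 TFLOPS single-precision figure.
# Assumptions: ~1.11 GHz boost clock (published spec) and 2 FLOPs
# per CUDA core per cycle (one fused multiply-add counts as two ops).
cuda_cores = 3072
boost_clock_ghz = 1.11
flops_per_core_per_cycle = 2

peak_tflops = cuda_cores * boost_clock_ghz * flops_per_core_per_cycle / 1000
print(f"{peak_tflops:.1f} TFLOPS")  # ~6.8, in line with "up to 7 teraflops"
```

This is a theoretical peak; real machine learning workloads sustain only a fraction of it, which is why vendors quote “up to” figures.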

Meanwhile, the Tesla M4 is a small, energy-efficient device - essentially a dedicated video card for the server that’s expected to outperform any graphics hardware integrated into the CPU. It has been optimized for tasks like video transcoding and image processing, and Nvidia claims it can complete these tasks while consuming as little as one-tenth the power of a CPU.

“Machine learning is unquestionably one of the most important developments in computing today, on the scale of the PC, the internet and cloud computing. Industries ranging from consumer cloud services, automotive and health care are being revolutionized as we speak,” said Jen-Hsun Huang, co-founder and CEO of Nvidia.

“Machine learning is the grand computational challenge of our generation. We created the Tesla hyperscale accelerator line to give machine learning a 10X boost. The time and cost savings to data centers will be significant.”

The Tesla M40 GPU accelerator and Hyperscale Suite software will be available later this year. The Tesla M4 GPU will be available in the first quarter of 2016.

The news will interest anyone who has been following the public unveiling of TensorFlow, Google’s internally developed AI engine. According to Wired’s Cade Metz, the move could signal a wider change in the world of computer hardware, with GPUs – previously underrepresented in the data center – taking a more prominent, if not leading, role.