Intel and Google Cloud are working together to develop a new chip for the data center.
The 'Mount Evans' infrastructure processing unit (IPU) will not be exclusive to Google's cloud and is expected to be sold to other data center and cloud companies.
The IPU, built on an application-specific integrated circuit (ASIC), lets hyperscalers offload infrastructure tasks from server CPUs, freeing up CPU cycles for revenue-generating tenant workloads.
Research from Google and Facebook found that infrastructure workloads consume 22-80 percent of CPU cycles across a range of microservice workloads.
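To put that range in perspective, a quick back-of-the-envelope calculation shows how many cores an IPU could hand back to tenants. The 22-80 percent figures come from the research cited above; the 64-core server size is a hypothetical example chosen for illustration, not a number from the article.

```python
# Back-of-the-envelope: cores freed if infrastructure work moves to an IPU.
# The 22-80% "infrastructure tax" range is from the Google/Facebook research;
# the 64-core server is a hypothetical example.
SERVER_CORES = 64

for tax in (0.22, 0.80):  # low and high ends of the reported range
    freed = round(SERVER_CORES * tax)
    print(f"{tax:.0%} infrastructure tax -> ~{freed} cores freed for tenant work")
```

Even at the low end of the range, that is roughly a quarter of a server's cores returned to billable work.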
The IPU is expected to drive changes in server design - notably the possibility of a fully diskless server architecture, in which servers draw on shared storage rather than each having its own disks.
"Because it is hard to predict storage usage on a tenant-by-tenant basis, each server must be over-provisioned with storage resources to handle peak storage loads with the traditional data center architecture," Patricia Kummrow, general manager of Intel's Ethernet Division, explained in a blog post.
"With a diskless server architecture, a central service provides storage resources for all tenants."
Intel already has its own IPU line, but those products are all based on Intel Stratix 10 FPGAs rather than ASICs. Others, such as Nvidia and Marvell, offer similar offloading chips, known as data processing units (DPUs).
"The Mount Evans IPU is based on a best-in-class packet-processing engine, instantiated in an ASIC," Kummrow said.
"This ASIC supports many existing use cases – including vSwitch offload, firewalls, and virtual routing – while providing significant headroom for future use cases. The Mount Evans IPU emulates NVMe devices at very high IOPS rates by leveraging and extending the Intel Optane NVMe controller. The same Intel infrastructure OS that runs on FPGA-based IPUs will run on Mount Evans as well."
The IPU can be paired with up to four Xeon CPUs, incorporates packet processing technology developed by Barefoot Networks, and features up to sixteen Arm Neoverse N1 cores.