AWS has announced Graviton2, a new CPU based on Arm's Neoverse cores, and Inferentia, a dedicated inference chip to help customers run their AI applications.


Unlike rivals Intel and AMD, Arm does not sell its own chips; it licenses out core designs, which other companies build upon, adding elements such as memory, storage and PCIe controllers.

According to the announcement at Amazon's re:Invent, Graviton2 processors are now available to customers; the new chips will reportedly deliver up to 40 percent better price/performance than comparable x86-based processors.

Since their introduction a year ago, Arm-based Amazon EC2 instances have been powered by the first generation of AWS's Graviton chips. This second version came about after Amazon decided to tailor the processors for customers who need to run more demanding workloads. The tech giant said diverse workloads require enhanced capabilities beyond those supported by the first-gen Graviton chip.


Amazon's Graviton2 provides up to 64 vCPUs, 25 Gbps of enhanced networking, and 18 Gbps of EBS bandwidth. Customers can also choose NVMe SSD local instance storage variants (C6gd, M6gd, and R6gd) or bare-metal options for all of the new instance types.
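As an illustration (not part of the announcement itself), launching one of the new Graviton2-backed instance types uses the standard EC2 workflow; the AMI ID and key pair name below are placeholders, and the AMI would need to be built for the Arm64 (aarch64) architecture:

```shell
# Launch a Graviton2-backed instance; m6g.large is one example size.
# ami-0123456789abcdef0 and my-key-pair are placeholders - substitute
# a real Arm64 (aarch64) AMI and your own key pair.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m6g.large \
    --count 1 \
    --key-name my-key-pair
```

The storage variants mentioned above map to the instance-type suffix: choosing `m6gd.large` instead of `m6g.large` adds NVMe SSD local instance storage.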

AWS CEO Andy Jassy said: "We decided that we were going to design chips to give you more capabilities. While lots of companies have been working with x86 for a long time, we wanted to push the price to performance ratio for you."

Jassy added that Intel and AMD remain key partners for AWS.


Customers will now also have access to EC2 Inf1, a dedicated instance type powered by its own machine learning inference chip.

Amazon EC2 Inf1 features the AWS Inferentia chip, which delivers high throughput, low latency, and sustained performance for real-time and batch inference applications.

AWS Inferentia provides 128 TOPS per chip and up to 2,000 TOPS per Amazon EC2 Inf1 instance, and supports multiple frameworks, including TensorFlow, PyTorch, and Apache MXNet.