At its data center and AI event in San Francisco, AMD pitched the Instinct MI300X GPU as built for generative AI workloads.

With up to 192GB of HBM3 memory, the accelerator can run some large language models, such as the 40-billion-parameter Falcon-40B, on a single chip.
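As a rough back-of-the-envelope check (our illustration, not AMD's figures): a 40-billion-parameter model held in 16-bit precision needs about 80GB for its weights alone, which is why it can fit within the MI300X's 192GB of HBM3.

    # Hypothetical sketch: approximate weight memory for a large language model.
    # Assumptions (not from AMD): FP16/BF16 weights at 2 bytes per parameter;
    # activations and KV cache would add further overhead on top of this.
    def weight_memory_gb(params_billions: float, bytes_per_param: float = 2.0) -> float:
        return params_billions * 1e9 * bytes_per_param / 1e9  # bytes -> GB

    print(weight_memory_gb(40))      # Falcon-40B in FP16: ~80GB of weights
    print(weight_memory_gb(40, 4))   # even FP32 (~160GB) stays under 192GB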

AMD CEO Dr. Lisa Su with the AMD Instinct MI300X – Sebastian Moss

The GPU is based on the company's next-generation CDNA 3 architecture and will begin sampling to key customers in Q3. The chip has 153 billion transistors, 5.2TBps of memory bandwidth, and 896GBps of Infinity Fabric bandwidth.

At the event, the company also announced the AMD Infinity Architecture Platform, which combines eight MI300X GPUs in an industry-standard design.

"AI is really the defining technology that's shaping the next generation of computing," CEO Dr. Lisa Su said. "When we try to size it we think about the data center AI accelerator TAM growing from something like $30 billion this year, over 50 percent compound annual growth rate to over $150bn in 2027."

She added: "We truly designed this chip for generative AI. I love this chip, by the way."
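Her figures are internally consistent: $30 billion compounding at 50 percent a year over the four years from 2023 to 2027 lands just above $150 billion, as this quick check shows (our arithmetic under those assumptions, not AMD's model).

    # Quick sanity check of the quoted TAM trajectory.
    # Assumption: four years of growth, 2023 -> 2027, at exactly 50% CAGR.
    tam_2023 = 30e9                    # $30bn data center AI accelerator TAM
    tam_2027 = tam_2023 * 1.5 ** 4     # compound growth over four years
    print(f"${tam_2027 / 1e9:.0f}bn")  # ~$152bn, i.e. "over $150 billion"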

AMD also said that the MI300A, an APU for HPC and AI workloads, has begun sampling to customers. It has 128GB of HBM3 memory, 24 Zen 4 CPU cores, and more than 146 billion transistors.
