AMD has unveiled the MI325X, the latest addition to its Instinct range of accelerators, which is expected to arrive later this year.
The MI325X is a high-bandwidth AI accelerator that boasts a memory capacity of 288GB, which is more than twice that of Nvidia’s H200 and 50 percent more than Nvidia’s Blackwell chips. This significant increase in memory capacity is due to the transition to HBM3e, which also boosts the MI325X’s memory bandwidth to 6TB/sec.
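As a rough sanity check on those ratios, the sketch below compares the stated 288GB against commonly cited capacities for the H200 and the Blackwell B200; the 141GB and 192GB figures are assumptions drawn from public spec sheets, not from this article.

```python
# Rough comparison of accelerator HBM capacities (GB).
# The H200 and B200 capacities are assumptions based on publicly cited specs,
# not figures taken from this article.
MI325X_GB = 288
H200_GB = 141      # assumed H200 HBM3e capacity
B200_GB = 192      # assumed Blackwell B200 HBM3e capacity

print(f"MI325X vs H200: {MI325X_GB / H200_GB:.2f}x")   # ~2.04x -> "more than twice"
print(f"MI325X vs B200: {MI325X_GB / B200_GB:.2f}x")   # 1.50x  -> "50 percent more"
```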
The MI325X is designed to deliver 1.3 petaFLOPS of dense BF16/FP16 performance or 2.6 petaFLOPS at FP8, figures that allow it to outperform the H200 at any given precision.
The MI325X's enhanced memory capacity and bandwidth are aimed at the main bottlenecks in AI inferencing. With support for 1-trillion-parameter models, the chip is positioned to push the performance and efficiency of large-scale AI workloads.
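As a back-of-the-envelope illustration of why that capacity matters for trillion-parameter inference, the sketch below estimates the raw weight footprint at a couple of precisions; the eight-accelerator node size is an assumption based on typical GPU server platforms, not a figure from this article, and the estimate ignores KV cache and activation memory.

```python
# Back-of-the-envelope memory footprint for model weights alone
# (ignores KV cache, activations and framework overhead).
# The 8-accelerator node size is an assumption, not a figure from the article.
PARAMS = 1_000_000_000_000            # 1 trillion parameters
BYTES_PER_PARAM = {"BF16/FP16": 2, "FP8": 1}
NODE_GB = 8 * 288                     # assumed 8x MI325X per node = 2304 GB

for fmt, nbytes in BYTES_PER_PARAM.items():
    weights_gb = PARAMS * nbytes / 1e9
    print(f"{fmt:9s}: {weights_gb:,.0f} GB of weights "
          f"({weights_gb / NODE_GB:.2f}x of an 8-GPU node's {NODE_GB} GB)")
```

At FP8, the weights of a 1-trillion-parameter model come to roughly 1TB, which fits comfortably within an assumed eight-MI325X node's 2.3TB of HBM3e.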
Read more: www.theregister.com