TSMC is Expecting One Trillion Transistor GPUs in a Decade

Taiwan Semiconductor Manufacturing Company (TSMC), the world’s largest contract chipmaker, has set an ambitious goal: a GPU with one trillion transistors, roughly 10 times the number found in today’s biggest chips. The company expects this to take a decade to achieve.
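A quick back-of-the-envelope sketch of what that goal implies: going from roughly 100 billion transistors today to one trillion in ten years. The 100-billion baseline is an approximation drawn from the "roughly 10 times" figure above, used here only for illustration.

```python
# Sketch: annual growth rate implied by a 10x transistor increase over
# ten years. The baseline figure is an assumption for illustration.
baseline = 100e9   # ~100 billion transistors, today's biggest chips
target = 1e12      # one trillion transistors, TSMC's stated goal
years = 10

cagr = (target / baseline) ** (1 / years) - 1
print(f"Required annual growth: {cagr:.1%}")  # roughly 26% per year
```

That ~26% per year is notably slower than the classic Moore's-law doubling every two years (~41% per year), which is consistent with the article's point that further gains must come from packaging rather than node shrinks alone.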

The AI boom is currently the main driver for increased compute power in chips, especially GPUs. As the traditional node-shrink era draws to a close, the way forward is clear: chiplets and 3D stacking. AMD’s MI300 series of accelerators, which packs 146 billion transistors across 13 stacked chiplets, is indicative of what future chips will require.

Advanced packaging technologies like chip-on-wafer-on-substrate (CoWoS) allow up to six reticle fields’ worth of chips on a single package. For vertical stacking, TSMC also touts its system-on-integrated-chips (SoIC) technology, which is used to stack chips the way high-bandwidth memory (HBM) is stacked. Current methods stack eight layers, with 12-layer stacks coming next. The transition from solder microbumps between layers to “hybrid bonding” with direct copper-to-copper connections will further increase density.
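The figures above translate into concrete silicon budgets. A short sketch, assuming the conventional ~830 mm² single-reticle limit (a common lithography figure, not stated in the article):

```python
# Sketch: package-level silicon area and HBM layer scaling implied by
# the figures above. The ~830 mm^2 reticle limit is an assumption.
RETICLE_MM2 = 830       # approximate area of one reticle field
reticle_fields = 6      # CoWoS capacity cited by TSMC

package_silicon = reticle_fields * RETICLE_MM2
print(f"Silicon per package: ~{package_silicon} mm^2")  # ~4980 mm^2

# Moving stacks from 8 to 12 layers is a 1.5x capacity gain
# in roughly the same footprint.
print(f"Stack capacity gain: {12 / 8:.2f}x")  # 1.50x
```

Several times the area of any single monolithic die is exactly why packaging, rather than lithography alone, becomes the scaling lever.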

Semiconductor manufacturing is about to become far more difficult and complex. The challenge used to be shrinking the node; with little left to shrink as we arrive at 2nm and beyond, it will instead lie in connecting chiplets both horizontally, as in Nvidia’s Blackwell GPU, and vertically, as in AMD’s MI300 accelerators. 3D-stacked chiplets will be the standard route to more compute power going forward.

read more > www.extremetech.com

NIMBUS27