The Future of AI Supercomputers May Lie in Light-Based Interconnects

OpenAI and other AI leaders believe that the next big leap in machine intelligence will require new forms of computer hardware. One such proposal involves connecting GPUs using light: a startup called Lightmatter has pitched a technology that could enable hyperscale computing by letting chips talk directly to one another over optical links.

Data today generally moves around inside computers, and, in the case of training AI algorithms, between chips inside a data center, via electrical signals. Longer links between racks already travel over optical fiber, but each hop requires converting signals between the electrical and optical domains, and that back-and-forth conversion creates a communications bottleneck. Lightmatter aims to connect hundreds of thousands or even millions of GPUs directly with optical links. This should allow data to move between chips at much higher speeds than is possible today, potentially enabling distributed AI supercomputers of extraordinary scale.
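A rough back-of-the-envelope sketch may help show why link bandwidth becomes the limiting factor at this scale. The numbers below (model size, link speeds, and the ring all-reduce communication pattern) are illustrative assumptions, not figures from Lightmatter or WIRED:

```python
# Illustrative sketch: estimate how long GPUs spend synchronizing
# gradients each training step with a ring all-reduce, which moves
# roughly 2 * model_size bytes over each GPU's link.

def allreduce_seconds(model_params: float, bytes_per_param: int,
                      link_bandwidth_gbps: float) -> float:
    """Approximate ring all-reduce time for one gradient sync."""
    payload_bytes = 2 * model_params * bytes_per_param  # ~2x data volume in a ring
    link_bytes_per_sec = link_bandwidth_gbps * 1e9 / 8  # Gb/s -> bytes/s
    return payload_bytes / link_bytes_per_sec

# Hypothetical 1-trillion-parameter model in 16-bit precision:
params = 1e12
electrical = allreduce_seconds(params, 2, 400)       # assumed 400 Gb/s electrical link
optical = allreduce_seconds(params, 2, 400 * 100)    # hypothetical 100x optical link

print(f"electrical link: {electrical:.1f} s per sync")  # ~80.0 s
print(f"optical link:    {optical:.2f} s per sync")     # ~0.80 s
```

Under these assumed numbers, a sync that stalls every GPU for over a minute on electrical links drops to under a second over optics, which is the kind of gap that determines whether adding more GPUs still speeds up training.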

Lightmatter’s technology, known as Passage, takes the form of optical interconnects built in silicon that allow its hardware to interface directly with the transistors on a silicon chip such as a GPU. The company claims this makes it possible to shuttle data between chips with 100 times the bandwidth of conventional electrical connections. Passage, which the company says will be ready by 2026, should allow more than a million GPUs to run in parallel on the same AI training run.


Read more at www.wired.com.