The tech industry is buzzing with the term “NPU,” or Neural Processing Unit. This specialized processor is designed to handle the complex mathematical computations required for machine learning algorithms. Unlike CPUs (Central Processing Units) and GPUs (Graphics Processing Units), which handle a variety of tasks, NPUs are specifically engineered to manage the intense demands of neural networks.
Rather than shipping as a separate chip, the NPU typically sits alongside the CPU and GPU as a dedicated block on the same system-on-a-chip, and it is built to process machine-learning tasks in parallel. That means it breaks a request into many smaller tasks and runs them simultaneously, which makes it remarkably efficient at AI workloads such as natural language processing and image analysis.
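To make that parallelism concrete, here is a minimal Python sketch, with made-up layer sizes, of how a neural network's core operation (a matrix-vector product) splits into independent chunks that can be computed at the same time. An NPU does this in dedicated silicon; the thread pool below is only a software illustration of the same idea.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((1024, 512))   # one layer's weight matrix (made-up size)
inputs = rng.random(512)            # one input activation vector

def chunk_output(rows):
    # Each slice of output neurons depends only on its own weight rows,
    # so the slices can be computed simultaneously.
    return weights[rows] @ inputs

# Split the output into 8 independent work units and run them concurrently.
chunks = np.array_split(np.arange(1024), 8)
with ThreadPoolExecutor() as pool:
    parts = list(pool.map(chunk_output, chunks))

output = np.concatenate(parts)
assert np.allclose(output, weights @ inputs)  # identical to one big product
```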
NPU performance is typically quoted in TOPS (tera operations per second), a measure of how many trillion operations the chip can execute each second. That throughput matters most for AI applications that must churn through vast amounts of data quickly.
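As a rough back-of-the-envelope illustration (the model size is hypothetical and perfect utilization is assumed), a TOPS rating converts into throughput like this:

```python
# Illustrative numbers only: neither the model size nor the utilization
# reflects any real benchmark.
tops = 45                 # NPU rating: 45 tera (10**12) operations per second
ops_per_inference = 5e9   # hypothetical model needing 5 billion ops per pass

inferences_per_second = (tops * 1e12) / ops_per_inference
print(f"{inferences_per_second:,.0f} inferences per second")  # 9,000
```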
The rise of NPUs is driven largely by the ongoing AI hype cycle, and tech companies are increasingly building them into their devices to support on-device AI. For instance, Microsoft’s new ARM-based Copilot+ PCs use the Qualcomm Snapdragon X Elite and X Plus chips, each of which includes an NPU rated at 45 TOPS. That headroom is what lets these PCs run AI features locally, the capability promised with the introduction of the so-called “AI PC.”
The introduction of NPUs has not been without controversy. There have been claims that some AI features can run on other ARM64-based PCs without relying on the NPU at all, which has raised questions about whether NPUs are truly necessary and whether they offer significant advantages over traditional CPUs and GPUs.
Despite these debates, the tech industry continues to extol the benefits of NPUs. From Apple to Intel to small PC startups, everyone is talking about them. As AI continues to evolve and become more prevalent, it’s likely that we’ll be hearing a lot more about NPUs in the future.
Read more: gizmodo.com