As expected, Computex turned into AI PC central with the first official keynote of the conference. AMD, which has integrated NPUs into its mobile PC SoCs for several generations, came out swinging with the Ryzen AI 300 mobile SoC. The new SoC exceeds Microsoft's requirement of 40 TOPS of NPU performance, setting a new mark of 50 NPU AI TOPS using the INT8 data type. Multiple OEMs were there to support AMD, but questions remain about both the Ryzen AI 300 and the new PCs it will be powering.
AMD did not provide complete specifications for the Ryzen AI 300 series of SoCs, but we did get a glimpse of some of the major features, which include up to 12 cores/24 threads of new fifth-generation Zen cores, an XDNA 2 NPU, and RDNA 3.5 graphics with 16 compute units. The first two devices in the product family will be the Ryzen AI 9 HX 370 and the Ryzen AI 9 365, the latter with slightly fewer CPU cores, less cache, and fewer graphics compute units, but with the same NPU capable of 50 AI TOPS.
According to AMD, the new XDNA 2 NPU will provide up to 5x the performance of the previous-generation NPU at up to 2x the power efficiency. It will also leverage a block floating point data format that allows for 16-bit accuracy at 8-bit processing speeds.
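AMD did not detail its exact implementation, but block floating point in general works by sharing a single exponent across a block of values, so each element needs only a small integer mantissa while the block as a whole retains a wide dynamic range. The sketch below is a generic Python illustration of that idea under assumed parameters (an 8-bit mantissa per element and a power-of-two shared exponent); the function names are my own and do not reflect AMD's or XDNA 2's actual format.

```python
import numpy as np

def bfp_quantize(block, mantissa_bits=8):
    """Quantize a 1-D block of floats to block floating point:
    one shared power-of-two exponent for the whole block plus a
    signed 8-bit integer mantissa per element."""
    max_abs = np.max(np.abs(block))
    if max_abs == 0:
        return np.zeros_like(block, dtype=np.int8), 0
    # Smallest exponent such that the largest magnitude fits in the mantissa range.
    mantissa_max = 2 ** (mantissa_bits - 1) - 1          # 127 for 8 bits
    shared_exp = int(np.ceil(np.log2(max_abs / mantissa_max)))
    scale = 2.0 ** shared_exp
    mantissas = np.clip(np.round(block / scale),
                        -(mantissa_max + 1), mantissa_max).astype(np.int8)
    return mantissas, shared_exp

def bfp_dequantize(mantissas, shared_exp):
    """Reconstruct approximate float values from mantissas and the shared exponent."""
    return mantissas.astype(np.float32) * (2.0 ** shared_exp)

# Example: a small block of weights stored as 8-bit mantissas plus one exponent.
weights = np.array([0.031, -0.004, 0.125, 0.066], dtype=np.float32)
m, e = bfp_quantize(weights)
print(m, e, bfp_dequantize(m, e))
```

Because the multiply-accumulate math operates on the 8-bit mantissas, the hardware can run at integer-like speeds while the shared exponent preserves much of the dynamic range a 16-bit floating point format would offer, which is the trade-off AMD is describing.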
PCs with the new devices will be introduced in July and will be capable of supporting on-device AI applications like Microsoft's Copilot, Recall, Cocreator, and Live Captions with real-time translation. AMD is also working with more than 150 independent software vendors (ISVs) to optimize AI applications for the platform.
What we don't know yet are the details, such as the overall mobile SoC AI TOPS (CPU+GPU+NPU), the power consumption of the devices and platforms, or the prices of the SoCs and the AI PCs built around them. However, given how competitive this segment is with Qualcomm and Intel SoCs, we expect AMD-based AI PCs to be price competitive.
While the PCs launching with the Ryzen AI 300 SoCs will not be available until more than a month after those launching with the Qualcomm Snapdragon X Elite SoCs, a month is not very long, and they will still arrive before the back-to-school buying season. Some Microsoft Copilot applications will be available soon, but it may take months or years before most applications are capable of efficiently leveraging the NPU, and the software will mature greatly over time. Tirias Research believes that as developers learn how to leverage on-device NPUs, they will quickly exceed the capabilities of current NPUs. As a result, we are in a new performance race to raise the AI capabilities of PCs year after year. Some of this performance will come from the NPUs, some from the other SoC processing engines, and some from discrete GPUs and/or dedicated AI accelerators. This is an exciting time in the PC segment, but it is only the beginning.
Please note that AMD also introduced new desktop PC processors, embedded SoCs, and data center products; we will save those for other articles.