LONDON, August 6, 2024: In 2021-2022, CPU vendors raced to build additional AI acceleration into their instruction sets. Intel, Arm, and AMD all announced that the feature was coming. Today, however, only a subset of Intel’s Xeon 6 chips and a relatively small number of smartphones have it. An Omdia research note, “Have in-CPU AI accelerators bombed?”, investigates why.
Alexander Harrowell, Omdia’s Principal Analyst for Advanced Computing, said: “There are reasons to think that volumes of Intel’s Sapphire Rapids and Emerald Rapids chips with the AMX AI extension were disappointing. Meanwhile, none of the major Arm-based server CPU projects has chosen to use the equivalent extension, and AMD ultimately did not go ahead with its own.”
“Results from the MLPerf Inference benchmarking project suggest that adding middle-weight GPUs offers far more performance per dollar than deploying CPUs with in-CPU acceleration. There are special cases where in-CPU extensions help – for example, when keeping the core count down matters because of per-core software licensing – but otherwise it looks like a better deal either to use GPUs or to select CPUs on factors such as instruction width and core density.”