Omdia: Google Cloud’s TPUs have a competitive edge in the race to rival NVIDIA’s AI chips

A new Omdia research report, “Checking in with hyperscalers’ AI chips: Spring 2024,” finds that Google has taken a clear lead among the hyperscale cloud providers in the race to compete with NVIDIA in AI hardware. Omdia estimates that as much as $6bn worth of Google Cloud TPUs will ship to the company’s data centers in 2024, where they support both in-house projects such as Gemini, Gemma, and Search, and customer workloads on Google Cloud Platform.

All three of the major hyperscale players now have a custom AI accelerator chip, but details of their commercial success or failure tend to be closely held. However, each of the hyperscalers works with at least one of a group of companies specializing in semi-custom silicon projects, such as Broadcom, Marvell, and Alchip, or with Arm plc’s Neoverse CSS service. Close examination of these partners’ financial reporting and public statements makes it possible to identify their customers and link them to the partners’ revenue numbers.

On this basis, Omdia finds that Google Cloud’s TPUs are doing distinctly better than rival efforts from Microsoft Azure, Amazon Web Services, and Meta Platforms. Omdia Principal Analyst for Advanced Computing, Alexander Harrowell, says, “This may explain how Google Cloud Platform itself has recently swung into profitability. In parallel with this, the semi-custom chip ecosystem itself is both growing and deepening its offering, supporting an industry-wide trend towards custom silicon.”

One thing yet to be resolved, though, is the identity of “Customer C”, a US-based cloud computing company that is not one of the three majors and whose AI chip is set to ramp in 2026. “Broadcom, for example, claims it can turn around an AI chip project in a year from signing, so the long lead time implies this is something qualitatively new,” explained Harrowell.
