OpenAI, Intel, and Qualcomm talk AI compute at legendary Hot Chips conference

Arik Gihon, Intel’s senior principal engineer for system-on-chip design, walks audience members through the design of Intel’s latest client processor, Lunar Lake.

Tiernan Ray for ZDNET

The science and engineering of making chips dedicated to processing artificial intelligence is as vibrant as ever, judging from Hot Chips, a well-attended chip conference taking place this week at Stanford University.

The Hot Chips show, currently in its 36th year, draws 1,500 attendees, just over half of whom participate via the online live feed and the rest at Stanford’s Memorial Auditorium. For decades, the show has been a hotbed for discussion of the most cutting-edge chips from Intel, AMD, IBM, and many other vendors, with companies often using the show to unveil new products. 

Also: Linus Torvalds talks AI, Rust adoption, and why the Linux kernel is ‘the only thing that matters’

This year’s conference received over a hundred submissions from all over the world. In the end, 24 talks were accepted, about as many as would fit in a two-day conference format. Two tutorial sessions took place on Sunday, with keynotes on Monday and Tuesday. There were also 13 poster sessions. 

The talks onstage and the poster presentations are highly technical and oriented toward engineers. Attendees tend to spread out laptops and extra screens, as if working through the sessions from their own offices. 

Attendees tend to set up camp with laptops, as if at a makeshift office. 

Tiernan Ray for ZDNET

Attendees at Hot Chips 2024

Tiernan Ray for ZDNET

Monday morning’s session, featuring presentations from Qualcomm about its Oryon processor for the data center and Intel’s Lunar Lake processor, drew a packed crowd and elicited plenty of audience questions. 

In recent years, a big focus has been on chips designed to run neural network forms of AI more efficiently. This year’s conference included a keynote by Trevor Cai, OpenAI’s head of hardware, on “Predictable scaling and infrastructure.” 

OpenAI infrastructure engineer Trevor Cai on the predictable scaling benefits of increasing computing power that have been OpenAI’s focus since the beginning.

Tiernan Ray for ZDNET  

Cai, who has spent his tenure at OpenAI building out the company’s compute infrastructure, said ChatGPT is the result of the company “spending years and billions of dollars predicting the next word better.” That work led to successive capabilities such as “zero-shot learning.”

“How did we know it would work?” Cai asked rhetorically. Because “scaling laws” show that ability improves predictably as a “power law” of the compute used: each time compute is doubled, the model’s loss takes a predictable step toward an “irreducible” entropy floor, he explained. 
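To make the idea concrete, here is a minimal sketch, in Python, of the kind of power-law relationship Cai described. The constants (the entropy floor, the scale factor, and the exponent) are invented for illustration and are not OpenAI’s figures.

# Illustrative sketch of a compute scaling law: loss falls as a power law of
# training compute toward an irreducible entropy floor. All constants are
# assumptions chosen for readability, not real fitted values.

def predicted_loss(compute_flops: float,
                   irreducible_loss: float = 1.7,    # assumed entropy floor
                   scale: float = 11.2,              # assumed fitted constant
                   exponent: float = 0.05) -> float: # assumed power-law exponent
    """Loss(C) = L_inf + a * C**(-b); only the second term shrinks with compute."""
    return irreducible_loss + scale * compute_flops ** (-exponent)

# Each doubling of compute multiplies the reducible term by 2**(-exponent),
# a fixed ratio, which is what makes the payoff of bigger clusters predictable.
for flops in (1e21, 2e21, 4e21, 8e21):
    print(f"{flops:.0e} FLOPs -> predicted loss {predicted_loss(flops):.3f}")

Because each doubling of compute shrinks only the reducible part of the loss by a fixed ratio, the return on a larger cluster can be estimated before a training run begins.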

Also: What to expect from Meta Connect 2024: A more affordable Quest, AR glasses, and more

“This is what allows us to make investments, to build massive clusters” of computers, Cai said. Still, there are “immense headwinds” to continuing along the scaling curve, he added, and OpenAI will have to grapple with very challenging algorithmic innovations.

For hardware, “Dollar and energy costs of these massive clusters become significant even for highest free-cash-flow generating companies,” said Cai.

The conference continues Tuesday with presentations by Advanced Micro Devices and startup Cerebras Systems, among others.
