SAN JOSE: Nvidia on Monday unveiled software aimed at making it easier for businesses to incorporate artificial intelligence systems into their work, broadening the chipmaker’s offerings.
The release highlights Nvidia’s push to expand into the business of running AI applications, known as inference, a market its chips do not dominate, said Joel Hellermark, CEO of Sana, a maker of AI assistants for companies.
Nvidia is best known for providing the chips used to train so-called foundation models like OpenAI’s GPT-4. Training involves ingesting large amounts of data and is done mostly by AI-focused and large tech companies.
Now, companies of all sizes are scrambling to incorporate those foundation models into their work, which can be complicated. The Nvidia tools released on Monday are designed to make it easier to modify and run various AI models on Nvidia hardware.
“It’s like buying a ready-made meal rather than going out and purchasing ingredients yourself,” said Ben Metcalfe, a venture capitalist who founded Monochrome Capital.
“The Googles and DoorDashes and Ubers, they can do all of this themselves, but now that Nvidia has more GPUs available they need to enable more companies to get value out of GPUs,” he said. Those less tech-savvy companies can use the “prepared recipes” to get their systems up and running, he said.
For example, ServiceNow, a firm that provides software for use by technical support staff inside big businesses, said it used Nvidia’s tools to create a “copilot” to help solve corporate IT problems.
Nvidia has some big-name partners for the new tools: Microsoft, Alphabet Inc’s Google and Amazon will offer them as part of their cloud computing services, and Google, Cohere, Meta and Mistral are among companies offering models. But OpenAI, which is backed by Microsoft, and Anthropic, two of the largest providers of foundation models, are notably missing from the list.
Nvidia’s tools offer a potential revenue boost for the chipmaker: they are part of its existing software suite, which costs $4,500 a year for each Nvidia chip used in a private data center, or $1 per hour in a cloud data center.