GPU titan Nvidia on Monday morning unveiled what it calls AI computing on your desktop: the DGX Station A100, which will be sold by a variety of partners and is expected to be available “this quarter,” the company said.
The announcement comes at the start of SC20, the annual supercomputing conference, held this time around as a virtual event given the COVID-19 pandemic.
Nvidia calls the DGX Station A100 an “AI appliance you can place anywhere.” The box, measuring 25 inches high, 10 inches across, and 20 inches deep, comes with four GPUs: either the existing 40-gigabyte A100 or a newly unveiled 80-gigabyte version. It weighs 91 lbs., though fully outfitted it tops out at 127 lbs. The total system has a maximum of 320 gigabytes of GPU memory, four of the 80-gigabyte parts. More information is available in the spec sheet.
Nvidia touts the throughput of the 80-gigabyte version of the A100 for large workloads:
The A100 80GB also enables training of the largest models, such as GPT-2, a natural language processing model with superhuman generative text capability, with more parameters fitting within a single HGX-powered server. This eliminates the need for data- or model-parallel architectures that can be time-consuming to implement and slow to run across multiple nodes.
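To make that parallelism point concrete, here is a minimal sketch of single-node, multi-GPU data-parallel training in PyTorch. This is my illustration, not Nvidia’s software: the model, layer widths, and batch size are toy stand-ins, and a real large language model is what would actually stress 80 gigabytes per GPU.

```python
# Hypothetical sketch: single-node data parallelism in PyTorch.
# When the whole model fits in each GPU's memory, every GPU holds a
# full copy and only the batch is split -- no cross-node model-parallel
# plumbing is required.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy stand-in; a real large model would have billions of parameters.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
)

# nn.DataParallel splits each batch across all visible GPUs on this
# one machine and gathers the results back automatically.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# One training step on random data, just to show the loop shape.
x = torch.randn(256, 1024, device=device)
y = torch.randn(256, 1024, device=device)
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```

The thrust of the product claim is that once per-GPU memory is large enough, this simple replication approach suffices where a sharded, multi-node model-parallel setup would otherwise be required.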
The A100 chips, based on Nvidia’s “Ampere” GPU architecture, were first unveiled in May of this year.
With today’s announcement, the building of complete AI systems by chip makers is now officially a trend. Nvidia’s box comes a year after startup Cerebras Systems unveiled a workstation-sized AI computer containing its WSE chip, the world’s largest computer chip. At the time, Cerebras made much of the size difference between its workstation computer and the data center pod needed to get equivalent supercomputing power from Nvidia GPUs.
Such prowess, Nvidia hopes, will keep its systems at the top of the MLPerf benchmark tests for AI performance.
Also: Nvidia intros new Ampere GPUs for visual computing
Another competitor, Graphcore, in July announced it would produce its first dedicated AI computer system, after initially only selling chips.
If 320 gigabytes of GPU memory is not enough for you, Nvidia also announced an update to its DGX A100, the 6U rack-mounted system with eight GPUs. It now has the option of up to 640 gigabytes of GPU memory, using eight of the 80-gigabyte A100s. The new DGX A100 is also expected to start shipping this quarter, Nvidia said.
Nvidia has no plans to sell the computer system directly; it will rely on partners. Pricing is to be announced by partners later this week, Nvidia said.
Also: Nvidia Ampere, plus the world’s most complex motherboard, will fuel gigantic AI models
Along with the new chip and the new system, Nvidia announced the newest version of its InfiniBand networking technology from its Mellanox group. The technology, in the form of switches, cables, adapter cards, and the new “data-processing unit,” or DPU, daughter cards, runs at 400 gigabits per second.
One of the main accomplishments of the new spec, according to Gilad Shainer, Nvidia’s head of networking technology, is that it is able to use copper wiring for lengths up to 1.5 meters, with optical transponders used beyond that. “We were happy we were able to do copper at 400 gig at all,” said Shainer in a media briefing.
The new spec is not only faster but also scales to far larger fabrics, allowing up to one million GPUs to be connected.
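For a rough sense of what 400 gigabits per second means in practice, here is a back-of-the-envelope calculation. The arithmetic is mine, not an Nvidia figure, and ignores protocol overhead:

```python
# Back-of-the-envelope only: how long would it take to move the full
# contents of one of the new 80-gigabyte A100s over a single link?
link_gbps = 400                # NDR InfiniBand link speed, gigabits/s
link_gB_per_s = link_gbps / 8  # = 50 gigabytes/s of raw throughput
gpu_memory_gb = 80             # memory of the new A100 variant
seconds = gpu_memory_gb / link_gB_per_s
print(f"{seconds:.1f} s to transfer {gpu_memory_gb} GB")  # -> 1.6 s
```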
Nvidia’s head of accelerated computing will give an address at SC20 today at 3 p.m. Pacific time, which you can catch here.