Nvidia researchers devise method for training GANs with less data

Nvidia researchers have created an augmentation method for training generative adversarial networks (GANs) that requires far less data. Nvidia has built GANs for creating works of art like landscape paintings and, more recently, one for video conferencing. (A GAN is a form of AI that pits a generator network against a discriminator network to create images or videos.) Training GANs can require upwards of 100,000 images, but an approach called adaptive discriminator augmentation (ADA), detailed in the paper “Training Generative Adversarial Networks with Limited Data,” achieves comparable results with 10 to 20 times less data.
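
For readers unfamiliar with the setup, here is a minimal, hypothetical PyTorch-style sketch of the adversarial game described above; the tiny placeholder networks, random stand-in data, and hyperparameters are illustrative assumptions, not Nvidia's StyleGAN2 code.

```python
# Minimal GAN training step (illustrative only; not Nvidia's implementation).
import torch
import torch.nn as nn

latent_dim, image_dim, batch = 64, 784, 32

# Placeholder networks: tiny MLPs standing in for a real generator/discriminator.
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, image_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_images = torch.randn(batch, image_dim)  # stand-in for a real training batch

# Discriminator step: push real scores toward 1 and generated scores toward 0.
z = torch.randn(batch, latent_dim)
fake_images = generator(z).detach()
d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
         bce(discriminator(fake_images), torch.zeros(batch, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator score generated images as real.
z = torch.randn(batch, latent_dim)
g_loss = bce(discriminator(generator(z)), torch.ones(batch, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In a real system the generator and discriminator are deep convolutional networks and the real batch comes from the training dataset, but the alternating objectives are the same.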

“The key problem with small datasets is that the discriminator overfits to the training examples; its feedback to the generator becomes meaningless and training starts to diverge,” the paper reads. “We demonstrate, on several datasets, that good results are now possible using only a few thousand images, often matching StyleGAN2 results with an order of magnitude fewer images.”
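The paper's remedy is to augment every image the discriminator sees and to tune the augmentation probability automatically from an overfitting heuristic (the paper targets a value of roughly 0.6 for the mean sign of the discriminator's outputs on real training images). The sketch below is a simplified, hypothetical rendering of that feedback loop; the flip-only augmentation, step size, and target constant here are placeholder assumptions rather than the paper's full non-leaking augmentation pipeline.

```python
# Sketch of an ADA-style adaptive augmentation schedule (illustrative only).
import torch

p = 0.0            # probability that a given image is augmented
target = 0.6       # overfitting-heuristic target reported in the paper
adjust_step = 0.01 # how fast p moves per update (placeholder value)

def augment(images, p):
    """Apply a simple augmentation (horizontal flip) to each image with
    probability p. The real method uses a much larger augmentation pipeline.
    images: tensor of shape (N, C, H, W)."""
    mask = (torch.rand(images.shape[0], 1, 1, 1) < p).float()
    return mask * torch.flip(images, dims=[-1]) + (1 - mask) * images

def update_p(p, d_real_logits):
    """Raise p when the discriminator is too confident on real images
    (a sign of overfitting), lower it otherwise. r_t is the mean sign of
    the discriminator's logits on real training images."""
    r_t = torch.sign(d_real_logits).mean().item()
    p += adjust_step if r_t > target else -adjust_step
    return min(max(p, 0.0), 1.0)
```

During training, both real and generated images would pass through augment() before being scored by the discriminator, and update_p() would be called periodically using the discriminator's logits on real images.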

Earlier this year, researchers from Adobe Research, MIT, and Tsinghua University detailed DiffAugment, another approach to augmentation for GANs.

Nvidia VP of graphics research David Luebke told VentureBeat that anybody who has done pragmatic data science in the wild knows the vast majority of time is spent collecting and curating data, a process sometimes referred to as the ETL pipeline: extract, transform, and load.

“That alone takes a huge chunk of pragmatic boots-on-the-ground data science, and we think this [approach] is super helpful because you don’t need nearly as much of that [data] to get useful results,” he said.

This can be even more important when working with annotators who have little time to spare, he said.

The paper's authors believe reducing data requirements can empower researchers to explore new use cases for GANs. Beyond creating fake photos of people or animals, researchers believe GANs may have applications in medical imaging.

“If you have a radiologist who specializes in a particular condition … to have him or her sit down and label 50,000 images for you probably isn’t going to happen … but to have them label 1,000 images seems quite possible. It really changes the amount of effort that a practical data scientist needs to put into curation of the data, and as a result it makes it a lot easier to do exploration,” Luebke said.

A paper detailing the approach was published this week as part of NeurIPS, the Conference on Neural Information Processing Systems and the largest annual AI research conference in the world.

“Training Generative Adversarial Networks with Limited Data” wasn’t the only GAN-related paper at the conference. Another paper introduces Discriminator Driven Latent Sampling (DDLS), which improves the performance of off-the-shelf GANs when assessed on the CIFAR-10 dataset. That paper was written by researchers from the MILA Quebec Artificial Intelligence Institute and Google Brain, including Yoshua Bengio and Hugo Larochelle, a Google Brain research scientist and general chair of the NeurIPS conference.

By VentureBeat
