Using images to build diagnostic models of diseases has become an active research topic in the AI community. But capturing the patterns that link a condition to its appearance in an image requires exposing a model to a rich variety of medical cases. It's well known that images from a single source can be biased by demographics, equipment, and means of acquisition; training a model on such images can cause it to perform poorly for other populations.
In search of a solution, researchers at Microsoft and the University of British Columbia developed a framework called Federated Learning with a Centralized Adversary (FELICIA), which extends a family of models called generative adversarial networks (GANs) to a federated learning environment using a "centralized adversary." They say that FELICIA could enable stakeholders like medical centers to collaborate with one another and improve models through privacy-preserving, distributed data sharing.
GANs are two-part AI models consisting of a generator that creates samples and a discriminator that attempts to differentiate between the generated samples and real-world samples. Federated learning, for its part, entails training algorithms across decentralized devices holding data samples without exchanging those samples. Local models are trained on local data, and their weights — the learnable parameters — are exchanged among the participants at some frequency to produce a global model.
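The weight-exchange step described above can be sketched with a toy federated averaging round over a simple linear model. This is a minimal illustration, not the paper's implementation; the function names, learning rate, and two-site setup are all assumptions chosen for clarity.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One gradient step of a toy linear model y = X @ w on one site's local data."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, local_datasets, lr=0.1):
    """Each site trains locally on its own samples; only the resulting
    weights are shared and averaged into the next global model."""
    local_weights = [local_update(global_weights.copy(), d, lr) for d in local_datasets]
    return np.mean(local_weights, axis=0)

# Two simulated sites whose inputs are drawn from shifted (biased) distributions,
# but which share the same underlying target function.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for shift in (-1.0, 1.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    sites.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, sites)
# w now approximates true_w without either site ever exposing its raw samples
```

Note that only model weights cross site boundaries here; the raw arrays in `sites` never leave their owners, which is the core privacy property federated learning relies on.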
With FELICIA, the researchers propose duplicating the discriminator and generator architectures of a “base GAN” to other component generator-discriminator pairs. A so-called privacy discriminator is selected to be nearly identical in design to the other discriminators, and most of the optimization effort is dedicated to training the base GAN on the whole training data to generate realistic — but synthetic — medical image scans.
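The duplication scheme described above — per-owner generator-discriminator pairs plus a centralized privacy discriminator that mirrors the discriminator design — can be wired up structurally with toy numpy MLPs. This is a sketch of the topology only, under assumed names and layer sizes; it is not the authors' architecture and omits training entirely.

```python
import numpy as np

rng = np.random.default_rng(42)

def init_mlp(sizes):
    """Random weights for a small MLP; identical `sizes` mean identical architecture."""
    return [rng.normal(0.0, 0.1, size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for W in layers[:-1]:
        x = np.tanh(x @ W)
    return x @ layers[-1]

GEN_SIZES = [8, 16, 2]   # noise vector -> 2-D synthetic sample (stand-in for an image)
DISC_SIZES = [2, 16, 1]  # sample -> real/fake score

# One generator-discriminator pair per data owner, each duplicating the base GAN design.
sites = [{"gen": init_mlp(GEN_SIZES), "disc": init_mlp(DISC_SIZES)} for _ in range(2)]

# The centralized privacy discriminator copies the same discriminator architecture.
privacy_disc = init_mlp(DISC_SIZES)

def synthesize(site, n):
    noise = rng.normal(size=(n, GEN_SIZES[0]))
    return forward(site["gen"], noise)

# Each local discriminator scores only its own site's synthetic batch...
local_scores = [forward(s["disc"], synthesize(s, 4)) for s in sites]
# ...while the centralized adversary scores samples pooled from every generator.
pooled = np.vstack([synthesize(s, 4) for s in sites])
central_scores = forward(privacy_disc, pooled)
```

The design point the sketch illustrates: because every discriminator (including the central one) shares one architecture, optimization effort spent on the base GAN transfers directly to the duplicated pairs.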
In experiments, the researchers simulated two hospitals with different populations, assuming a "very restrictive" regulation that prevented sharing images as well as models that had access to them. They used a dataset of handwritten digits (MNIST) to see whether FELICIA could help generate high-quality synthetic data even when both data owners have biased coverage. Additionally, they used a more complex dataset (CIFAR10) to show how utility could be significantly improved when a certain type of image was underrepresented in the data. They also tested FELICIA in a federated learning setting with medical imagery, using a popular skin lesion image dataset.
According to the researchers, the results of the experiments show that FELICIA has potentially wide application in health care research settings. For example, it could be used to augment an image dataset to improve diagnostics, such as the classification of cancer pathology images. "The data from one research center is often biased towards the dominating population of the available data for training. FELICIA could help mitigate bias by allowing sites from all over the world create a synthetic dataset based on a more general population," the researchers wrote in a paper describing their work.
In the future, the researchers plan to implement FELICIA with a GAN that can generate "highly complex" medical images such as CT scans, X-rays, and histopathology slides in real-world federated learning settings with "non-local" data owners.