Deep Fakes – How Generative Adversarial Networks (GANs) Are Spurring Healthcare AI Performance

NewtonX has previously written about the challenges of accessing sufficient data for training healthcare-related AI. Aside from privacy issues, there is also the problem of inconsistent data labeling and structure from organization to organization. Hospitals are notoriously siloed, and even research organizations struggle to enforce consistency across teams, let alone between organizations. However, based on a series of NewtonX expert calls with AI researchers in the healthcare sector, a new method for generating artificial data may be the key to unlocking training data.

Nvidia recently used Generative Adversarial Networks (GANs, the technique behind so-called 'deep fakes') to create realistic MRI images of brain tumors for the purpose of training AI. The company's paper on the subject inspired us to talk with researchers who are actually using, or are considering piloting, these techniques to generate fake patient data. The data and insights in this article are informed by those NewtonX expert consultations.

The Copycats: How Fake Data Can be Used for Real Applications

GANs essentially find structure in a set of training data, which allows them to generate realistic fake data of their own. They do this by having two networks take turns learning from each other: a generator neural network, which produces fake data, and a discriminator, which judges whether a given sample is real or fake. Together, these networks form a GAN and can produce highly realistic fake data. For instance, you could feed a GAN real bird calls, and it could produce its own realistic-sounding bird call. This ability makes GANs incredibly useful in data-limited situations: if you have a dearth of rare brain tumor images, you could use a GAN to generate realistic-looking fake brain tumor MRIs. In fact, the technology is so promising to researchers that the Director of AI Research at Facebook called GANs the most exciting development in machine learning of the last decade.
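The generator-versus-discriminator game described above can be sketched end to end on toy data. The following is a minimal, illustrative example (not the method used in any of the studies discussed here): both "networks" are reduced to single linear units, the generator learns to mimic 1-D measurements drawn from a normal distribution, and the alternating updates are hand-coded gradient ascent on the standard GAN objectives. All names and values are ours for illustration.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: 1-D samples standing in for scarce real measurements.
REAL_MEAN, REAL_STD = 4.0, 1.25

# Generator g(z) = a*z + b maps noise z ~ N(0, 1) to fake samples.
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.0, 0.0

lr, batch = 0.05, 64
for _ in range(3000):
    real = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(batch)]
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fake = [a * zi + b for zi in z]

    # Discriminator's turn: push d(real) toward 1 and d(fake) toward 0.
    p_real = [sigmoid(w * x + c) for x in real]
    p_fake = [sigmoid(w * x + c) for x in fake]
    w += lr * (sum((1 - p) * x for p, x in zip(p_real, real))
               - sum(p * x for p, x in zip(p_fake, fake))) / batch
    c += lr * (sum(1 - p for p in p_real) - sum(p_fake)) / batch

    # Generator's turn: push d(fake) toward 1 (non-saturating loss).
    p_fake = [sigmoid(w * x + c) for x in fake]
    a += lr * sum((1 - p) * w * zi for p, zi in zip(p_fake, z)) / batch
    b += lr * sum((1 - p) * w for p in p_fake) / batch

# After training, the generator's samples should cluster near the real data.
samples = [a * random.gauss(0.0, 1.0) + b for _ in range(10_000)]
fake_mean = sum(samples) / len(samples)
print(f"fake mean: {fake_mean:.2f} (real mean: {REAL_MEAN})")
```

Swap the linear units for deep convolutional networks and the same alternating loop is what produces realistic synthetic MRIs; in practice researchers use a framework such as PyTorch or TensorFlow with automatic differentiation rather than hand-coded gradients.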

As we mentioned above, Nvidia, along with the Mayo Clinic, and the MGH & BWH Center for Clinical Data Science, recently used GANs to create fake MRIs of brain tumors. The organizations said that GANs were a low-cost solution to the issue of imbalanced data sets due to a lack of pathologic findings, which tend to be rare. By training a GAN to generate synthetic abnormal MRIs, they were able to improve tumor segmentation and save doctors hours of time.

This wasn’t the first time that GANs had been used to generate medical images. For instance, at the 2018 IEEE International Conference on Healthcare Informatics (ICHI), researchers presented GAN-generated synthetic images used to train a classification model for tissue recognition. The model's tissue recognition accuracy was 98.83%, evidence that AI trained on GAN-generated synthetic data can reach accuracy levels equal or superior to human accuracy.

Other researchers have used GANs to generate a synthetic head CT from a brain MRI, to augment data for liver lesion classification, and to generate synthetic retinal images.

Why Many Experts Believe the Reality Gap Is Too Wide for Fake Data to Make a Difference

There are numerous challenges to working with GANs, and to say that they are an affordable alternative to accessing real data is not always accurate, according to multiple NewtonX experts.

GANs come with unique issues that do not arise in other types of machine learning. For one, the generator and discriminator can ‘forget’ strategies they used earlier in training, which can trap the networks in a stable cycle with no improvement. The opposite can also happen: one network may overpower the other, such that neither learns from the other anymore. Finally, the networks can experience ‘mode collapse,’ which occurs when the generator learns only a subset of the realistic data (for instance, learning to generate only fat white cats instead of cats of all shapes and sizes).

These challenges are hardly insurmountable (as the successes above demonstrate), and there are numerous strategies for preventing these failure modes, including labeling training data and batch normalization. However, building a functional GAN for complicated medical AI tasks requires time, money, and talent. Multiple NewtonX experts cited a talent shortage as a primary blocker for using this technology to further medical AI development: there simply aren’t enough people who can successfully build GANs to go around.
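Of the stabilization strategies just mentioned, batch normalization is the most mechanical: each mini-batch of activations is rescaled to zero mean and unit variance, then scaled and shifted by two learned parameters. Here is a minimal sketch over a batch of scalars (the function name, the gamma/beta defaults, and the example values are ours for illustration; real implementations normalize per feature channel and also track running statistics for use at inference time):

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a mini-batch of scalars to zero mean and unit variance,
    then apply the learned scale (gamma) and shift (beta)."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

activations = [2.0, 4.0, 6.0, 8.0]
normed = batch_norm(activations)
```

Keeping each layer's activations in this standard range helps prevent one network from saturating and overpowering the other during the adversarial back-and-forth.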

Additionally, training takes time, large networks, and careful parameter tuning before a GAN can actually generate realistic synthetic data. Currently, many medical researchers say it’s a toss-up whether the investment in creating fake data is actually an improvement over the investment in cleaning and collecting real data. Ultimately, the place where GANs may prove most useful for medical AI is in areas where data isn’t just siloed, but is genuinely scarce and difficult to generate.



About Author

Germain Chastel is the CEO and Founder of NewtonX.
