Red-GAN: Attacking class imbalance via conditioned generation. Yet another medical imaging perspective.
Proceedings of the Third Conference on Medical Imaging with Deep Learning, PMLR 121:655-668, 2020.
Abstract
Training learning algorithms under scarce data regimes is a limitation, and a reality, of the medical imaging field. In an attempt to mitigate the problem, we propose a data augmentation protocol based on generative adversarial networks. We condition the networks at a pixel level (segmentation mask) and at a global level (acquisition environment or lesion type). Such conditioning provides immediate access to image-label pairs while controlling the global, class-specific appearance of the synthesized images. To stimulate the synthesis of features relevant to the segmentation task, an additional passive player in the form of a segmentor is introduced into the adversarial game. We validate the approach on two medical datasets: BraTS and ISIC. By controlling the class distribution through the injection of synthetic images into the training set, we achieve control over the accuracy levels of the datasets' classes. The code is available at https://github.com/IvanEz/Red-GAN.
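The sketch below illustrates the conditioning idea described in the abstract, not the authors' exact architecture (see the linked repository for that): a generator receives a pixel-level condition (segmentation mask) and a global-level label (e.g. acquisition environment or lesion type), and a frozen segmentor can be scored on the synthetic output as a passive third player. All layer sizes and names here are illustrative assumptions.

```python
# Minimal PyTorch sketch of mask + global-label conditioning (illustrative only).
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, mask_channels=1, n_global_classes=4, img_channels=1, embed_dim=8):
        super().__init__()
        # The global label is embedded, broadcast to a feature map,
        # and concatenated with the segmentation mask channel-wise.
        self.embed = nn.Embedding(n_global_classes, embed_dim)
        self.net = nn.Sequential(
            nn.Conv2d(mask_channels + embed_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, img_channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, mask, global_label):
        b, _, h, w = mask.shape
        cond = self.embed(global_label).view(b, -1, 1, 1).expand(b, -1, h, w)
        return self.net(torch.cat([mask, cond], dim=1))

# Toy usage: synthesize images from a random binary mask and a class index.
gen = ConditionalGenerator()
mask = torch.randint(0, 2, (2, 1, 64, 64)).float()   # pixel-level condition
label = torch.tensor([0, 3])                          # global condition
fake = gen(mask, label)                               # (2, 1, 64, 64) synthetic images

# A pre-trained segmentor (any nn.Module kept frozen) could then be evaluated
# on `fake`, and its segmentation loss added to the generator objective to
# encourage synthesis of features relevant to the downstream segmentation task.
```

Because every synthetic image is generated from a known mask and global label, the image-label pair is available by construction, which is what allows the class distribution of the augmented training set to be controlled directly.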