EMIXER: End-to-end Multimodal X-ray Generation via Self-supervision
Proceedings of the 7th Machine Learning for Healthcare Conference, PMLR 182:297-324, 2022.
Abstract
Deep generative models have enabled the automated synthesis of high-quality data for diverse applications. However, the most effective generative models are specialized in data from a single domain (e.g., images or text). Real-world applications such as healthcare require multimodal data from multiple domains (e.g., both images and corresponding text), which are challenging to acquire due to limited availability and privacy concerns, and are much harder to synthesize. To tackle this joint synthesis challenge, we propose an End-to-end MultImodal X-ray genERative model (EMIXER) for jointly synthesizing X-ray images and corresponding free-text reports, all conditioned on diagnosis labels. EMIXER is a conditional generative adversarial model that 1) generates an image based on a label, 2) encodes the image into a hidden embedding, 3) produces the corresponding text from the image embedding via a hierarchical decoder, and 4) uses a joint discriminator to assess both the image and the corresponding text. EMIXER also supports self-supervision to leverage a vast amount of unlabeled data. Extensive experiments with real X-ray report data illustrate how data augmentation using synthesized multimodal samples can improve the performance of various supervised tasks, including COVID-19 X-ray classification with limited samples. Radiologists also confirm the quality of the generated images and reports. We quantitatively show that EMIXER-generated synthetic datasets can augment X-ray image classification and report generation models, yielding 5.94% and 6.9% improvements over models trained only on real data samples. Overall, our results highlight the promise of generative models to overcome challenges in machine learning in healthcare.
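The four-step pipeline named in the abstract (label-conditional image generation, image encoding, report decoding, joint discrimination) can be pictured with the minimal PyTorch sketch below. Everything here is an illustrative assumption rather than the authors' implementation: the class name, layer sizes, the flattened linear generator, and the single GRU standing in for the paper's hierarchical decoder are all placeholders chosen only to make the data flow concrete.

    import torch
    import torch.nn as nn

    class Emixer(nn.Module):
        """Illustrative four-step pipeline: label -> image -> embedding
        -> report, with a joint (image, text) discriminator. All sizes
        and module choices are placeholder assumptions."""

        def __init__(self, num_labels=2, noise_dim=128, embed_dim=256,
                     vocab_size=5000, img_pixels=64 * 64):
            super().__init__()
            # 1) Label-conditional image generator: (noise, label) -> image.
            self.label_embed = nn.Embedding(num_labels, noise_dim)
            self.generator = nn.Sequential(
                nn.Linear(2 * noise_dim, img_pixels), nn.Tanh())
            # 2) Image encoder: image -> hidden embedding.
            self.encoder = nn.Linear(img_pixels, embed_dim)
            # 3) Report decoder from the image embedding (a single GRU
            #    stands in for the paper's hierarchical decoder).
            self.decoder = nn.GRU(embed_dim, embed_dim, batch_first=True)
            self.token_head = nn.Linear(embed_dim, vocab_size)
            # 4) Joint discriminator scoring the (image, text) pair.
            self.discriminator = nn.Linear(img_pixels + embed_dim, 1)

        def forward(self, labels, noise, report_len=32):
            # 1) Generate a (flattened) image conditioned on the label.
            z = torch.cat([noise, self.label_embed(labels)], dim=-1)
            image = self.generator(z)
            # 2) Encode the generated image into a hidden embedding.
            h = self.encoder(image)
            # 3) Decode a token sequence from the repeated embedding.
            states, _ = self.decoder(h.unsqueeze(1).repeat(1, report_len, 1))
            report_logits = self.token_head(states)
            # 4) Score the pair; the report is mean-pooled decoder states.
            score = self.discriminator(
                torch.cat([image, states.mean(dim=1)], dim=-1))
            return image, report_logits, score

    model = Emixer()
    labels = torch.tensor([0, 1])            # two diagnosis labels
    image, report_logits, score = model(labels, torch.randn(2, 128))

In a full adversarial training loop, the discriminator's score on real versus generated (image, report) pairs would supply the loss that pushes the two modalities to be jointly, not just individually, realistic; that loop is omitted here.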