EMIXER: End-to-end Multimodal X-ray Generation via Self-supervision

Siddharth Biswal, Peiye Zhuang, Ayis Pyrros, Nasir Siddiqui, Sanmi Koyejo, Jimeng Sun
Proceedings of the 7th Machine Learning for Healthcare Conference, PMLR 182:297-324, 2022.

Abstract

Deep generative models have enabled the automated synthesis of high-quality data for diverse applications. However, the most effective generative models are specialized in data from a single domain (e.g., images or text). Real-world applications such as healthcare require multimodal data from multiple domains (e.g., both images and corresponding text), which are challenging to acquire due to limited availability and privacy concerns and are much harder to synthesize. To tackle this joint synthesis challenge, we propose an End-to-end MultImodal X-ray genERative model (EMIXER) for jointly synthesizing X-ray images and corresponding free-text reports, all conditioned on diagnosis labels. EMIXER is a conditional generative adversarial model that 1) generates an image from a label, 2) encodes the image into a hidden embedding, 3) produces the corresponding text from the image embedding via a hierarchical decoder, and 4) uses a joint discriminator to assess both the image and the corresponding text. EMIXER also supports self-supervision to leverage vast amounts of unlabeled data. Extensive experiments with real X-ray report data illustrate how data augmentation using synthesized multimodal samples can improve the performance of various supervised tasks, including COVID-19 X-ray classification with limited samples. Radiologists also confirm the quality of the generated images and reports. We quantitatively show that synthetic datasets generated by EMIXER can augment X-ray image classification and report generation models, yielding improvements of 5.94% and 6.9%, respectively, over models trained only on real data samples. Overall, our results highlight the promise of generative models for overcoming data challenges in machine learning for healthcare.
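
The four numbered components map naturally onto four modules: a label-conditioned image generator, an image encoder, a hierarchical report decoder, and a joint image-text discriminator. Below is a minimal PyTorch sketch of how these pieces might be wired together end to end. It illustrates the data flow only; every module name, layer choice, and dimension here (ImageGenerator, a single-GRU stand-in for the hierarchical sentence/word decoder, 64x64 images, a 5,000-token vocabulary) is an assumption made for readability, not the authors' implementation.

import torch
import torch.nn as nn

class ImageGenerator(nn.Module):
    """(1) Generate an X-ray image conditioned on a diagnosis label."""
    def __init__(self, num_labels=14, z_dim=128, img_size=64):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, z_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 512), nn.ReLU(),
            nn.Linear(512, img_size * img_size), nn.Tanh(),
        )
        self.img_size = img_size

    def forward(self, z, labels):
        # Concatenate noise with the label embedding, then decode to an image.
        h = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(h).view(-1, 1, self.img_size, self.img_size)

class ImageEncoder(nn.Module):
    """(2) Encode the generated image into a hidden embedding."""
    def __init__(self, img_size=64, emb_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(img_size * img_size, emb_dim), nn.ReLU()
        )

    def forward(self, img):
        return self.net(img)

class ReportDecoder(nn.Module):
    """(3) Decode the image embedding into report text. A real hierarchical
    decoder has a sentence-level RNN driving a word-level RNN; a single GRU
    stands in for both levels in this sketch."""
    def __init__(self, emb_dim=256, vocab_size=5000, max_len=40):
        super().__init__()
        self.gru = nn.GRU(emb_dim, emb_dim, batch_first=True)
        self.out = nn.Linear(emb_dim, vocab_size)
        self.max_len = max_len

    def forward(self, img_emb):
        # Feed the image embedding at every time step (a simplification).
        steps = img_emb.unsqueeze(1).repeat(1, self.max_len, 1)
        h, _ = self.gru(steps)
        return self.out(h)  # (batch, max_len, vocab) token logits

class JointDiscriminator(nn.Module):
    """(4) Score an (image, report) pair as real or synthetic."""
    def __init__(self, img_size=64, emb_dim=256, vocab_size=5000):
        super().__init__()
        self.img_branch = nn.Sequential(
            nn.Flatten(), nn.Linear(img_size * img_size, emb_dim)
        )
        self.txt_branch = nn.Linear(vocab_size, emb_dim)
        self.head = nn.Linear(2 * emb_dim, 1)

    def forward(self, img, txt_logits):
        img_h = self.img_branch(img)
        txt_h = self.txt_branch(txt_logits.mean(dim=1))  # pool over time steps
        return self.head(torch.cat([img_h, txt_h], dim=1))

# End-to-end flow: label -> image -> embedding -> report -> joint score.
G, E, Dec, D = ImageGenerator(), ImageEncoder(), ReportDecoder(), JointDiscriminator()
z = torch.randn(8, 128)
labels = torch.randint(0, 14, (8,))
img = G(z, labels)
report_logits = Dec(E(img))
score = D(img, report_logits)
print(img.shape, report_logits.shape, score.shape)

In full training, the joint discriminator would be optimized adversarially against the generator, encoder, and decoder, so that each (image, report) pair is judged for consistency as a whole; the label conditioning is what would let a trained model synthesize class-balanced image-report pairs for downstream augmentation.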

Cite this Paper


BibTeX
@InProceedings{pmlr-v182-biswal22a,
  title     = {EMIXER: End-to-end Multimodal X-ray Generation via Self-supervision},
  author    = {Biswal, Siddharth and Zhuang, Peiye and Pyrros, Ayis and Siddiqui, Nasir and Koyejo, Sanmi and Sun, Jimeng},
  booktitle = {Proceedings of the 7th Machine Learning for Healthcare Conference},
  pages     = {297--324},
  year      = {2022},
  editor    = {Lipton, Zachary and Ranganath, Rajesh and Sendak, Mark and Sjoding, Michael and Yeung, Serena},
  volume    = {182},
  series    = {Proceedings of Machine Learning Research},
  month     = {05--06 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v182/biswal22a/biswal22a.pdf},
  url       = {https://proceedings.mlr.press/v182/biswal22a.html}
}
Endnote
%0 Conference Paper
%T EMIXER: End-to-end Multimodal X-ray Generation via Self-supervision
%A Siddharth Biswal
%A Peiye Zhuang
%A Ayis Pyrros
%A Nasir Siddiqui
%A Sanmi Koyejo
%A Jimeng Sun
%B Proceedings of the 7th Machine Learning for Healthcare Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Zachary Lipton
%E Rajesh Ranganath
%E Mark Sendak
%E Michael Sjoding
%E Serena Yeung
%F pmlr-v182-biswal22a
%I PMLR
%P 297--324
%U https://proceedings.mlr.press/v182/biswal22a.html
%V 182
APA
Biswal, S., Zhuang, P., Pyrros, A., Siddiqui, N., Koyejo, S., & Sun, J. (2022). EMIXER: End-to-end Multimodal X-ray Generation via Self-supervision. Proceedings of the 7th Machine Learning for Healthcare Conference, in Proceedings of Machine Learning Research 182:297-324. Available from https://proceedings.mlr.press/v182/biswal22a.html.
