Capturing Single-Cell Phenotypic Variation via Unsupervised Representation Learning

Maxime W. Lafarge, Juan C. Caicedo, Anne E. Carpenter, Josien P.W. Pluim, Shantanu Singh, Mitko Veta
Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning, PMLR 102:315-325, 2019.

Abstract

We propose a novel variational autoencoder (VAE) framework for learning representations of cell images for the domain of image-based profiling, important for new therapeutic discovery. Previously, generative adversarial network-based (GAN) approaches were proposed to enable biologists to visualize structural variations in cells that drive differences in populations. However, while the images were realistic, they did not provide direct reconstructions from representations, and their performance in downstream analysis was poor. We address these limitations in our approach by adding an adversarial-driven similarity constraint applied to the standard VAE framework, and a progressive training procedure that allows higher quality reconstructions than standard VAEs. The proposed models improve classification accuracy by 22% (to 90%) compared to the best reported GAN model, making it competitive with other models that have higher quality representations, but lack the ability to synthesize images. This provides researchers with a new tool to match cellular phenotypes effectively, and also to gain better insight into the cellular structure variations that are driving differences between populations of cells.
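The abstract combines a standard VAE objective with an adversarial similarity constraint. The sketch below is a minimal, illustrative numpy toy, not the authors' implementation: simple linear maps stand in for the encoder, decoder, and discriminator networks, and the discriminator feature-matching term and the 0.1 loss weighting are hypothetical stand-ins for the paper's adversarial constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened single-cell image patch and its latent code.
x_dim, z_dim = 64, 8

# Hypothetical linear weights standing in for the conv networks.
W_enc_mu = rng.normal(scale=0.1, size=(z_dim, x_dim))
W_enc_logvar = rng.normal(scale=0.1, size=(z_dim, x_dim))
W_dec = rng.normal(scale=0.1, size=(x_dim, z_dim))
W_disc = rng.normal(scale=0.1, size=(16, x_dim))  # discriminator feature map

def vae_losses(x):
    # Encoder: approximate posterior q(z|x) = N(mu, diag(exp(logvar))).
    mu = W_enc_mu @ x
    logvar = W_enc_logvar @ x
    # Reparameterization trick: z = mu + sigma * eps.
    eps = rng.normal(size=z_dim)
    z = mu + np.exp(0.5 * logvar) * eps
    # Decoder reconstruction.
    x_hat = W_dec @ z
    # Standard ELBO terms: reconstruction error and KL to the N(0, I) prior.
    recon = np.mean((x - x_hat) ** 2)
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    # Adversarial similarity term: match discriminator features of x and
    # x_hat (a feature-matching stand-in for the adversarial constraint).
    sim = np.mean((W_disc @ x - W_disc @ x_hat) ** 2)
    return recon, kl, sim

x = rng.normal(size=x_dim)
recon, kl, sim = vae_losses(x)
total = recon + kl + 0.1 * sim  # 0.1 is an arbitrary illustrative weight
```

In a real training loop the three terms would drive gradient updates of the encoder/decoder while the discriminator is trained adversarially; here only the forward loss computation is shown.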

Cite this Paper


BibTeX
@InProceedings{pmlr-v102-lafarge19a,
  title     = {Capturing Single-Cell Phenotypic Variation via Unsupervised Representation Learning},
  author    = {Lafarge, Maxime W. and Caicedo, Juan C. and Carpenter, Anne E. and Pluim, Josien P.W. and Singh, Shantanu and Veta, Mitko},
  booktitle = {Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning},
  pages     = {315--325},
  year      = {2019},
  editor    = {Cardoso, M. Jorge and Feragen, Aasa and Glocker, Ben and Konukoglu, Ender and Oguz, Ipek and Unal, Gozde and Vercauteren, Tom},
  volume    = {102},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v102/lafarge19a/lafarge19a.pdf},
  url       = {https://proceedings.mlr.press/v102/lafarge19a.html},
  abstract  = {We propose a novel variational autoencoder (VAE) framework for learning representations of cell images for the domain of image-based profiling, important for new therapeutic discovery. Previously, generative adversarial network-based (GAN) approaches were proposed to enable biologists to visualize structural variations in cells that drive differences in populations. However, while the images were realistic, they did not provide direct reconstructions from representations, and their performance in downstream analysis was poor. We address these limitations in our approach by adding an adversarial-driven similarity constraint applied to the standard VAE framework, and a progressive training procedure that allows higher quality reconstructions than standard VAE’s. The proposed models improve classification accuracy by 22% (to 90%) compared to the best reported GAN model, making it competitive with other models that have higher quality representations, but lack the ability to synthesize images. This provides researchers a new tool to match cellular phenotypes effectively, and also to gain better insight into cellular structure variations that are driving differences between populations of cells.}
}
Endnote
%0 Conference Paper
%T Capturing Single-Cell Phenotypic Variation via Unsupervised Representation Learning
%A Maxime W. Lafarge
%A Juan C. Caicedo
%A Anne E. Carpenter
%A Josien P.W. Pluim
%A Shantanu Singh
%A Mitko Veta
%B Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2019
%E M. Jorge Cardoso
%E Aasa Feragen
%E Ben Glocker
%E Ender Konukoglu
%E Ipek Oguz
%E Gozde Unal
%E Tom Vercauteren
%F pmlr-v102-lafarge19a
%I PMLR
%P 315--325
%U https://proceedings.mlr.press/v102/lafarge19a.html
%V 102
%X We propose a novel variational autoencoder (VAE) framework for learning representations of cell images for the domain of image-based profiling, important for new therapeutic discovery. Previously, generative adversarial network-based (GAN) approaches were proposed to enable biologists to visualize structural variations in cells that drive differences in populations. However, while the images were realistic, they did not provide direct reconstructions from representations, and their performance in downstream analysis was poor. We address these limitations in our approach by adding an adversarial-driven similarity constraint applied to the standard VAE framework, and a progressive training procedure that allows higher quality reconstructions than standard VAE’s. The proposed models improve classification accuracy by 22% (to 90%) compared to the best reported GAN model, making it competitive with other models that have higher quality representations, but lack the ability to synthesize images. This provides researchers a new tool to match cellular phenotypes effectively, and also to gain better insight into cellular structure variations that are driving differences between populations of cells.
APA
Lafarge, M.W., Caicedo, J.C., Carpenter, A.E., Pluim, J.P.W., Singh, S. & Veta, M. (2019). Capturing Single-Cell Phenotypic Variation via Unsupervised Representation Learning. Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 102:315-325. Available from https://proceedings.mlr.press/v102/lafarge19a.html.