Image Synthesis with a Convolutional Capsule Generative Adversarial Network

Cher Bass, Tianhong Dai, Benjamin Billot, Kai Arulkumaran, Antonia Creswell, Claudia Clopath, Vincenzo De Paola, Anil Anthony Bharath
Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning, PMLR 102:39-62, 2019.

Abstract

Machine learning for biomedical imaging often suffers from a lack of labelled training data. One solution is to use generative models to synthesise more data. To this end, we introduce CapsPix2Pix, which combines convolutional capsules with the \texttt{pix2pix} framework, to synthesise images conditioned on class segmentation labels. We apply our approach to a new biomedical dataset of cortical axons imaged by two-photon microscopy, as a method of data augmentation for small datasets. We evaluate performance both qualitatively and quantitatively. Quantitative evaluation is performed by using image data generated by either CapsPix2Pix or \texttt{pix2pix} to train a U-net on a segmentation task, then testing on real microscopy data. Our method quantitatively performs as well as \texttt{pix2pix}, with an order of magnitude fewer parameters. Additionally, CapsPix2Pix is far more capable at synthesising images of different appearance, but the same underlying geometry. Finally, qualitative analysis of the features learned by CapsPix2Pix suggests that individual capsules capture diverse and often semantically meaningful groups of features, covering structures such as synapses, axons and noise.
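The quantitative evaluation described above trains a U-net on synthesised images and then tests it on real microscopy data. Segmentation quality in such train-on-synthetic/test-on-real setups is commonly scored with an overlap metric such as the Dice coefficient; the snippet below is an illustrative sketch of that metric (the specific metric and function names are assumptions here, not taken from the abstract):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks.

    A standard overlap metric for evaluating a segmentation network;
    the paper's exact evaluation metric is not stated in the abstract.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy masks standing in for U-net output vs. a real annotation
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])
print(round(dice_score(pred, target), 3))  # → 0.667
```

In the setup the abstract describes, a score computed this way on held-out real images would let the synthetic training sets from CapsPix2Pix and `pix2pix` be compared directly.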

Cite this Paper


BibTeX
@InProceedings{pmlr-v102-bass19a,
  title     = {Image Synthesis with a Convolutional Capsule Generative Adversarial Network},
  author    = {Bass, Cher and Dai, Tianhong and Billot, Benjamin and Arulkumaran, Kai and Creswell, Antonia and Clopath, Claudia and {De Paola}, Vincenzo and Bharath, Anil Anthony},
  booktitle = {Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning},
  pages     = {39--62},
  year      = {2019},
  editor    = {Cardoso, M. Jorge and Feragen, Aasa and Glocker, Ben and Konukoglu, Ender and Oguz, Ipek and Unal, Gozde and Vercauteren, Tom},
  volume    = {102},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v102/bass19a/bass19a.pdf},
  url       = {https://proceedings.mlr.press/v102/bass19a.html},
  abstract  = {Machine learning for biomedical imaging often suffers from a lack of labelled training data. One solution is to use generative models to synthesise more data. To this end, we introduce CapsPix2Pix, which combines convolutional capsules with the \texttt{pix2pix} framework, to synthesise images conditioned on class segmentation labels. We apply our approach to a new biomedical dataset of cortical axons imaged by two-photon microscopy, as a method of data augmentation for small datasets. We evaluate performance both qualitatively and quantitatively. Quantitative evaluation is performed by using image data generated by either CapsPix2Pix or \texttt{pix2pix} to train a U-net on a segmentation task, then testing on real microscopy data. Our method quantitatively performs as well as \texttt{pix2pix}, with an order of magnitude fewer parameters. Additionally, CapsPix2Pix is far more capable at synthesising images of different appearance, but the same underlying geometry. Finally, qualitative analysis of the features learned by CapsPix2Pix suggests that individual capsules capture diverse and often semantically meaningful groups of features, covering structures such as synapses, axons and noise.}
}
Endnote
%0 Conference Paper
%T Image Synthesis with a Convolutional Capsule Generative Adversarial Network
%A Cher Bass
%A Tianhong Dai
%A Benjamin Billot
%A Kai Arulkumaran
%A Antonia Creswell
%A Claudia Clopath
%A Vincenzo De Paola
%A Anil Anthony Bharath
%B Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2019
%E M. Jorge Cardoso
%E Aasa Feragen
%E Ben Glocker
%E Ender Konukoglu
%E Ipek Oguz
%E Gozde Unal
%E Tom Vercauteren
%F pmlr-v102-bass19a
%I PMLR
%P 39--62
%U https://proceedings.mlr.press/v102/bass19a.html
%V 102
%X Machine learning for biomedical imaging often suffers from a lack of labelled training data. One solution is to use generative models to synthesise more data. To this end, we introduce CapsPix2Pix, which combines convolutional capsules with the \texttt{pix2pix} framework, to synthesise images conditioned on class segmentation labels. We apply our approach to a new biomedical dataset of cortical axons imaged by two-photon microscopy, as a method of data augmentation for small datasets. We evaluate performance both qualitatively and quantitatively. Quantitative evaluation is performed by using image data generated by either CapsPix2Pix or \texttt{pix2pix} to train a U-net on a segmentation task, then testing on real microscopy data. Our method quantitatively performs as well as \texttt{pix2pix}, with an order of magnitude fewer parameters. Additionally, CapsPix2Pix is far more capable at synthesising images of different appearance, but the same underlying geometry. Finally, qualitative analysis of the features learned by CapsPix2Pix suggests that individual capsules capture diverse and often semantically meaningful groups of features, covering structures such as synapses, axons and noise.
APA
Bass, C., Dai, T., Billot, B., Arulkumaran, K., Creswell, A., Clopath, C., De Paola, V. & Bharath, A.A. (2019). Image Synthesis with a Convolutional Capsule Generative Adversarial Network. Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 102:39-62. Available from https://proceedings.mlr.press/v102/bass19a.html.
