Learning Shape Reconstruction from Sparse Measurements with Neural Implicit Functions

Tamaz Amiranashvili, David Lüdke, Hongwei Bran Li, Bjoern Menze, Stefan Zachow
Proceedings of The 5th International Conference on Medical Imaging with Deep Learning, PMLR 172:22-34, 2022.

Abstract

Reconstructing anatomical shapes from sparse or partial measurements relies on prior knowledge of shape variations that occur within a given population. Such shape priors are learned from example shapes, obtained by segmenting volumetric medical images. For existing models, the resolution of a learned shape prior is limited to the resolution of the training data. However, in clinical practice, volumetric images are often acquired with highly anisotropic voxel sizes, e.g., to reduce image acquisition time in MRI or radiation exposure in CT imaging. The missing shape information between the slices prevents existing methods from learning a high-resolution shape prior. We introduce a method for high-resolution shape reconstruction from sparse measurements without relying on high-resolution ground truth for training. Our method is based on neural implicit shape representations and learns a continuous shape prior only from highly anisotropic segmentations. Furthermore, it is able to learn from shapes with a varying field of view and can reconstruct from various sparse input configurations. We demonstrate its effectiveness on two anatomical structures: vertebra and distal femur, and successfully reconstruct high-resolution shapes from sparse segmentations, using as few as three orthogonal slices.
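
The core idea behind neural implicit shape representations is that a network is queried at continuous 3D coordinates, so it can be supervised only at points lying on the acquired (sparse, anisotropic) slices and still be evaluated on an arbitrarily fine grid at reconstruction time. The sketch below is purely illustrative and is not the authors' released implementation: it assumes an auto-decoder setup in the spirit of DeepSDF-style occupancy decoders, and all class names, dimensions, and the training helper are hypothetical.

import torch
import torch.nn as nn

class ImplicitShapeDecoder(nn.Module):
    """MLP mapping a per-shape latent code plus a 3D query point to an occupancy logit."""
    def __init__(self, latent_dim=128, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # occupancy logit per query point
        )

    def forward(self, latent, xyz):
        # latent: (B, latent_dim) per-shape code; xyz: (B, N, 3) continuous coordinates
        b, n, _ = xyz.shape
        z = latent.unsqueeze(1).expand(b, n, latent.shape[-1])
        return self.net(torch.cat([z, xyz], dim=-1)).squeeze(-1)  # (B, N) logits

def train_step(decoder, latent_codes, shape_ids, slice_xyz, slice_occ, optimizer):
    # latent_codes: nn.Embedding over training shapes, optimised jointly with the decoder.
    # slice_xyz / slice_occ: query points and occupancy labels sampled ONLY on the
    # available slices of each anisotropic segmentation (hypothetical inputs).
    optimizer.zero_grad()
    logits = decoder(latent_codes(shape_ids), slice_xyz)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, slice_occ.float())
    loss.backward()
    optimizer.step()
    return loss.item()

In such a setup, reconstructing a new shape from sparse input would amount to optimising a latent code against the observed slices and then evaluating the decoder on a dense high-resolution grid.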

Cite this Paper


BibTeX
@InProceedings{pmlr-v172-amiranashvili22a,
  title     = {Learning Shape Reconstruction from Sparse Measurements with Neural Implicit Functions},
  author    = {Amiranashvili, Tamaz and L{\"u}dke, David and Li, Hongwei Bran and Menze, Bjoern and Zachow, Stefan},
  booktitle = {Proceedings of The 5th International Conference on Medical Imaging with Deep Learning},
  pages     = {22--34},
  year      = {2022},
  editor    = {Konukoglu, Ender and Menze, Bjoern and Venkataraman, Archana and Baumgartner, Christian and Dou, Qi and Albarqouni, Shadi},
  volume    = {172},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--08 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v172/amiranashvili22a/amiranashvili22a.pdf},
  url       = {https://proceedings.mlr.press/v172/amiranashvili22a.html},
  abstract  = {Reconstructing anatomical shapes from sparse or partial measurements relies on prior knowledge of shape variations that occur within a given population. Such shape priors are learned from example shapes, obtained by segmenting volumetric medical images. For existing models, the resolution of a learned shape prior is limited to the resolution of the training data. However, in clinical practice, volumetric images are often acquired with highly anisotropic voxel sizes, e.g. to reduce image acquisition time in MRI or radiation exposure in CT imaging. The missing shape information between the slices prohibits existing methods to learn a high-resolution shape prior. We introduce a method for high-resolution shape reconstruction from sparse measurements without relying on high-resolution ground truth for training. Our method is based on neural implicit shape representations and learns a continuous shape prior only from highly anisotropic segmentations. Furthermore, it is able to learn from shapes with a varying field of view and can reconstruct from various sparse input configurations. We demonstrate its effectiveness on two anatomical structures: vertebra and distal femur, and successfully reconstruct high-resolution shapes from sparse segmentations, using as few as three orthogonal slices.}
}
Endnote
%0 Conference Paper
%T Learning Shape Reconstruction from Sparse Measurements with Neural Implicit Functions
%A Tamaz Amiranashvili
%A David Lüdke
%A Hongwei Bran Li
%A Bjoern Menze
%A Stefan Zachow
%B Proceedings of The 5th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Ender Konukoglu
%E Bjoern Menze
%E Archana Venkataraman
%E Christian Baumgartner
%E Qi Dou
%E Shadi Albarqouni
%F pmlr-v172-amiranashvili22a
%I PMLR
%P 22--34
%U https://proceedings.mlr.press/v172/amiranashvili22a.html
%V 172
%X Reconstructing anatomical shapes from sparse or partial measurements relies on prior knowledge of shape variations that occur within a given population. Such shape priors are learned from example shapes, obtained by segmenting volumetric medical images. For existing models, the resolution of a learned shape prior is limited to the resolution of the training data. However, in clinical practice, volumetric images are often acquired with highly anisotropic voxel sizes, e.g. to reduce image acquisition time in MRI or radiation exposure in CT imaging. The missing shape information between the slices prohibits existing methods to learn a high-resolution shape prior. We introduce a method for high-resolution shape reconstruction from sparse measurements without relying on high-resolution ground truth for training. Our method is based on neural implicit shape representations and learns a continuous shape prior only from highly anisotropic segmentations. Furthermore, it is able to learn from shapes with a varying field of view and can reconstruct from various sparse input configurations. We demonstrate its effectiveness on two anatomical structures: vertebra and distal femur, and successfully reconstruct high-resolution shapes from sparse segmentations, using as few as three orthogonal slices.
APA
Amiranashvili, T., Lüdke, D., Li, H.B., Menze, B. & Zachow, S. (2022). Learning Shape Reconstruction from Sparse Measurements with Neural Implicit Functions. Proceedings of The 5th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 172:22-34. Available from https://proceedings.mlr.press/v172/amiranashvili22a.html.