Modeling the acquisition shift between axial and sagittal MRI for diffusion superresolution to enable axial spine segmentation

Robert Graf, Hendrik Möller, Julian McGinnis, Sebastian Rühling, Maren Weihrauch, Matan Atad, Suprosanna Shit, Bjoern Menze, Mark Mühlau, Johannes C. Paetzold, Daniel Rueckert, Jan Kirschke
Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, PMLR 250:520-537, 2024.

Abstract

Spine MRIs are usually acquired in highly anisotropic 2D axial or sagittal slices. Vertebra structures are not fully resolved in these images, and multi-image superresolution by aligning scans to pair them is difficult due to partial volume effects and inter-vertebral movement during acquisition. Hence, we propose an unpaired inpainting superresolution algorithm that extrapolates the missing spine structures. We generate synthetic training pairs by multiple degradation functions that model the data shift and acquisition errors between sagittal slices and sagittal views of axial images. Our method employs modeling of the k-space point spread function and the interslice gap. Further, we imitate different MR acquisition challenges like histogram shifts, bias fields, interlace movement artifacts, Gaussian noise, and blur. This enables the training of diffusion-based superresolution models on scaling factors larger than 6$\times$ without real paired data. The low z-resolution in axial images prevents existing approaches from separating individual vertebrae instances. By applying this superresolution model to the z-dimension, we can generate images that allow a pre-trained segmentation model to distinguish between vertebrae and enable automatic segmentation and processing of axial images. We experimentally benchmark our method and show that diffusion-based superresolution outperforms state-of-the-art super-resolution models.
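The degradation pipeline described in the abstract (k-space point spread function, interslice gap, Gaussian noise) can be illustrated with a toy NumPy sketch. This is not the authors' implementation; the function name `degrade_z` and all parameters (`factor`, `gap_frac`, `noise_std`) are illustrative assumptions of how such synthetic low-resolution/high-resolution training pairs might be generated.

```python
import numpy as np

def degrade_z(volume, factor=6, gap_frac=0.2, noise_std=0.02, rng=None):
    """Toy model of the sagittal-to-axial acquisition shift along z:
    (1) k-space truncation approximating the slice point spread function,
    (2) slice-profile averaging with an interslice gap,
    (3) additive Gaussian noise.
    Illustrative only -- a simplified stand-in for the paper's pipeline."""
    rng = rng if rng is not None else np.random.default_rng(0)
    z = volume.shape[-1]
    # (1) keep only low z-frequencies: an ideal low-pass filter,
    #     i.e. a sinc-like point spread function in image space
    k = np.fft.fft(volume, axis=-1)
    mask = np.abs(np.fft.fftfreq(z)) <= 0.5 / factor
    blurred = np.fft.ifft(k * mask, axis=-1).real
    # (2) each acquired slice averages ~factor*(1-gap_frac) voxels;
    #     the remaining voxels fall into the interslice gap
    thick = max(1, int(round(factor * (1 - gap_frac))))
    low = np.stack(
        [blurred[..., i * factor : i * factor + thick].mean(axis=-1)
         for i in range(z // factor)],
        axis=-1,
    )
    # (3) scanner noise on the low-resolution stack
    return low + rng.normal(0.0, noise_std, size=low.shape)
```

A superresolution model would then be trained on pairs `(degrade_z(volume), volume)` taken from high-resolution sagittal data; the paper additionally models histogram shifts, bias fields, interlace movement artifacts, and blur, which are omitted here for brevity.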

Cite this Paper


BibTeX
@InProceedings{pmlr-v250-graf24a,
  title     = {Modeling the acquisition shift between axial and sagittal MRI for diffusion superresolution to enable axial spine segmentation},
  author    = {Graf, Robert and M\"oller, Hendrik and McGinnis, Julian and R\"uhling, Sebastian and Weihrauch, Maren and Atad, Matan and Shit, Suprosanna and Menze, Bjoern and M\"uhlau, Mark and Paetzold, Johannes C. and Rueckert, Daniel and Kirschke, Jan},
  booktitle = {Proceedings of The 7th International Conference on Medical Imaging with Deep Learning},
  pages     = {520--537},
  year      = {2024},
  editor    = {Burgos, Ninon and Petitjean, Caroline and Vakalopoulou, Maria and Christodoulidis, Stergios and Coupe, Pierrick and Delingette, Hervé and Lartizien, Carole and Mateus, Diana},
  volume    = {250},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v250/main/assets/graf24a/graf24a.pdf},
  url       = {https://proceedings.mlr.press/v250/graf24a.html},
  abstract  = {Spine MRIs are usually acquired in highly anisotropic 2D axial or sagittal slices. Vertebra structures are not fully resolved in these images, and multi-image superresolution by aligning scans to pair them is difficult due to partial volume effects and inter-vertebral movement during acquisition. Hence, we propose an unpaired inpainting superresolution algorithm that extrapolates the missing spine structures. We generate synthetic training pairs by multiple degradation functions that model the data shift and acquisition errors between sagittal slices and sagittal views of axial images. Our method employs modeling of the k-space point spread function and the interslice gap. Further, we imitate different MR acquisition challenges like histogram shifts, bias fields, interlace movement artifacts, Gaussian noise, and blur. This enables the training of diffusion-based superresolution models on scaling factors larger than 6$\times$ without real paired data. The low z-resolution in axial images prevents existing approaches from separating individual vertebrae instances. By applying this superresolution model to the z-dimension, we can generate images that allow a pre-trained segmentation model to distinguish between vertebrae and enable automatic segmentation and processing of axial images. We experimentally benchmark our method and show that diffusion-based superresolution outperforms state-of-the-art super-resolution models.}
}
Endnote
%0 Conference Paper
%T Modeling the acquisition shift between axial and sagittal MRI for diffusion superresolution to enable axial spine segmentation
%A Robert Graf
%A Hendrik Möller
%A Julian McGinnis
%A Sebastian Rühling
%A Maren Weihrauch
%A Matan Atad
%A Suprosanna Shit
%A Bjoern Menze
%A Mark Mühlau
%A Johannes C. Paetzold
%A Daniel Rueckert
%A Jan Kirschke
%B Proceedings of The 7th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ninon Burgos
%E Caroline Petitjean
%E Maria Vakalopoulou
%E Stergios Christodoulidis
%E Pierrick Coupe
%E Hervé Delingette
%E Carole Lartizien
%E Diana Mateus
%F pmlr-v250-graf24a
%I PMLR
%P 520--537
%U https://proceedings.mlr.press/v250/graf24a.html
%V 250
%X Spine MRIs are usually acquired in highly anisotropic 2D axial or sagittal slices. Vertebra structures are not fully resolved in these images, and multi-image superresolution by aligning scans to pair them is difficult due to partial volume effects and inter-vertebral movement during acquisition. Hence, we propose an unpaired inpainting superresolution algorithm that extrapolates the missing spine structures. We generate synthetic training pairs by multiple degradation functions that model the data shift and acquisition errors between sagittal slices and sagittal views of axial images. Our method employs modeling of the k-space point spread function and the interslice gap. Further, we imitate different MR acquisition challenges like histogram shifts, bias fields, interlace movement artifacts, Gaussian noise, and blur. This enables the training of diffusion-based superresolution models on scaling factors larger than 6$\times$ without real paired data. The low z-resolution in axial images prevents existing approaches from separating individual vertebrae instances. By applying this superresolution model to the z-dimension, we can generate images that allow a pre-trained segmentation model to distinguish between vertebrae and enable automatic segmentation and processing of axial images. We experimentally benchmark our method and show that diffusion-based superresolution outperforms state-of-the-art super-resolution models.
APA
Graf, R., Möller, H., McGinnis, J., Rühling, S., Weihrauch, M., Atad, M., Shit, S., Menze, B., Mühlau, M., Paetzold, J.C., Rueckert, D. & Kirschke, J. (2024). Modeling the acquisition shift between axial and sagittal MRI for diffusion superresolution to enable axial spine segmentation. Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 250:520-537. Available from https://proceedings.mlr.press/v250/graf24a.html.
