Transformer-based out-of-distribution detection for clinically safe segmentation

Mark S Graham, Petru-Daniel Tudosiu, Paul Wright, Walter Hugo Lopez Pinaya, Jean-Marie U-King-Im, Yee H Mah, James T Teo, Rolf Jager, David Werring, Parashkev Nachev, Sebastien Ourselin, M. Jorge Cardoso
Proceedings of The 5th International Conference on Medical Imaging with Deep Learning, PMLR 172:457-476, 2022.

Abstract

In a clinical setting it is essential that deployed image processing systems are robust to the full range of inputs they might encounter and, in particular, do not make confidently wrong predictions. The most popular approach to safe processing is to train networks that can provide a measure of their uncertainty, but these tend to fail for inputs that are far outside the training data distribution. Recently, generative modelling approaches have been proposed as an alternative; these can quantify the likelihood of a data sample explicitly, filtering out any out-of-distribution (OOD) samples before further processing is performed. In this work, we focus on image segmentation and evaluate several approaches to network uncertainty in the far-OOD and near-OOD cases for the task of segmenting haemorrhages in head CTs. We find all of these approaches are unsuitable for safe segmentation as they provide confidently wrong predictions when operating OOD. We propose performing full 3D OOD detection using a VQ-GAN to provide a compressed latent representation of the image and a transformer to estimate the data likelihood. Our approach successfully identifies images in both the far- and near-OOD cases. We find a strong relationship between image likelihood and the quality of a model’s segmentation, making this approach viable for filtering images unsuitable for segmentation. To our knowledge, this is the first time transformers have been applied to perform OOD detection on 3D image data.
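For readers wanting a concrete picture of the likelihood-based filtering described above, the following is a minimal PyTorch sketch. It assumes a VQ-GAN exposing a hypothetical encode_to_tokens method that maps a 3D volume to a sequence of codebook indices, and an autoregressive transformer that returns next-token logits; the sos_token and threshold arguments are likewise illustrative placeholders, not the paper's actual interface.

import torch
import torch.nn.functional as F

@torch.no_grad()
def volume_nll(vqgan, transformer, volume, sos_token):
    # Compress the 3D volume into a 1D sequence of discrete codebook indices
    # (hypothetical interface standing in for the paper's VQ-GAN encoder).
    tokens = vqgan.encode_to_tokens(volume)                        # (1, N) long tensor
    # Shift right and prepend a start token so each code is predicted only
    # from the codes before it (autoregressive factorisation of the likelihood).
    sos = torch.full((1, 1), sos_token, dtype=torch.long, device=tokens.device)
    logits = transformer(torch.cat([sos, tokens[:, :-1]], dim=1))  # (1, N, vocab)
    # Negative log-likelihood of the volume = summed per-token cross-entropy.
    return F.cross_entropy(logits.transpose(1, 2), tokens, reduction="sum").item()

def keep_in_distribution(volumes, vqgan, transformer, sos_token, threshold):
    # Reject volumes whose NLL exceeds a threshold chosen on held-out
    # in-distribution data (e.g. a high percentile of training-set NLLs),
    # so only images the segmentation model can be trusted on are passed through.
    return [v for v in volumes if volume_nll(vqgan, transformer, v, sos_token) < threshold]

Because the paper reports a strong relationship between image likelihood and segmentation quality, the same score could also be used to rank images by expected segmentation reliability rather than only to hard-filter them.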

Cite this Paper


BibTeX
@InProceedings{pmlr-v172-graham22a,
  title     = {Transformer-based out-of-distribution detection for clinically safe segmentation},
  author    = {Graham, Mark S and Tudosiu, Petru-Daniel and Wright, Paul and Pinaya, Walter Hugo Lopez and U-King-Im, Jean-Marie and Mah, Yee H and Teo, James T and Jager, Rolf and Werring, David and Nachev, Parashkev and Ourselin, Sebastien and Cardoso, M. Jorge},
  booktitle = {Proceedings of The 5th International Conference on Medical Imaging with Deep Learning},
  pages     = {457--476},
  year      = {2022},
  editor    = {Konukoglu, Ender and Menze, Bjoern and Venkataraman, Archana and Baumgartner, Christian and Dou, Qi and Albarqouni, Shadi},
  volume    = {172},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--08 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v172/graham22a/graham22a.pdf},
  url       = {https://proceedings.mlr.press/v172/graham22a.html},
  abstract  = {In a clinical setting it is essential that deployed image processing systems are robust to the full range of inputs they might encounter and, in particular, do not make confidently wrong predictions. The most popular approach to safe processing is to train networks that can provide a measure of their uncertainty, but these tend to fail for inputs that are far outside the training data distribution. Recently, generative modelling approaches have been proposed as an alternative; these can quantify the likelihood of a data sample explicitly, filtering out any out-of-distribution (OOD) samples before further processing is performed. In this work, we focus on image segmentation and evaluate several approaches to network uncertainty in the far-OOD and near-OOD cases for the task of segmenting haemorrhages in head CTs. We find all of these approaches are unsuitable for safe segmentation as they provide confidently wrong predictions when operating OOD. We propose performing full 3D OOD detection using a VQ-GAN to provide a compressed latent representation of the image and a transformer to estimate the data likelihood. Our approach successfully identifies images in both the far- and near-OOD cases. We find a strong relationship between image likelihood and the quality of a model’s segmentation, making this approach viable for filtering images unsuitable for segmentation. To our knowledge, this is the first time transformers have been applied to perform OOD detection on 3D image data.}
}
APA
Graham, M.S., Tudosiu, P., Wright, P., Pinaya, W.H.L., U-King-Im, J., Mah, Y.H., Teo, J.T., Jager, R., Werring, D., Nachev, P., Ourselin, S. & Cardoso, M.J. (2022). Transformer-based out-of-distribution detection for clinically safe segmentation. Proceedings of The 5th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 172:457-476. Available from https://proceedings.mlr.press/v172/graham22a.html.
