The OOD Blind Spot of Unsupervised Anomaly Detection

Matthäus Heer, Janis Postels, Xiaoran Chen, Ender Konukoglu, Shadi Albarqouni
Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, PMLR 143:286-300, 2021.

Abstract

Deep unsupervised generative models are regarded as a promising alternative to their supervised counterparts in the field of MRI-based lesion detection. They offer a principled approach for detecting unseen types of anomalies without relying on large amounts of expensive ground-truth annotations. To this end, deep generative models are trained exclusively on data from healthy patients and detect lesions as Out-of-Distribution (OOD) data at test time (i.e., data assigned low likelihood under the model). While this is a promising way of bypassing the need for costly annotations, this work demonstrates that it also renders this widely used unsupervised anomaly detection approach particularly vulnerable to non-lesion-based OOD data (e.g., data from different sensors). Since models are likely to be exposed to such OOD data in production, it is crucial to employ safety mechanisms that filter out such samples and run inference only on inputs for which the model can provide reliable results. First, we show extensively that conventional unsupervised anomaly detection mechanisms fail when presented with true OOD data. Second, we apply prior knowledge to disentangle lesion-based OOD data from its non-lesion-based counterpart.
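
To make the pipeline concrete, here is a minimal sketch of the detection scheme the abstract describes, written in PyTorch. The architecture, the random stand-in data, and the percentile threshold rule are illustrative assumptions for this page, not the models or datasets used in the paper:

# Minimal sketch (not the paper's exact models): unsupervised anomaly
# detection with an autoencoder trained only on "healthy" images.
# Test inputs are scored by reconstruction error; a threshold calibrated
# on held-out healthy data flags high-error inputs as anomalous.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def residual_score(model, x):
    """Per-sample mean squared reconstruction error (anomaly score)."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).flatten(1).mean(dim=1)

# Toy training loop on stand-in "healthy" data (random tensors as
# placeholders for healthy MRI slices).
model = ConvAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
healthy = torch.rand(64, 1, 32, 32)
for _ in range(5):
    loss = nn.functional.mse_loss(model(healthy), healthy)
    opt.zero_grad(); loss.backward(); opt.step()

# Calibrate a threshold on held-out healthy data (here: 95th percentile
# of healthy scores), then flag test inputs whose score exceeds it.
threshold = residual_score(model, torch.rand(32, 1, 32, 32)).quantile(0.95)
test = torch.rand(8, 1, 32, 32)
flags = residual_score(model, test) > threshold
print(flags)

The blind spot discussed above shows up directly in this score: a lesion and an image from a different scanner can both yield a high residual_score (low likelihood under the healthy-data model), so the threshold alone cannot separate clinically relevant anomalies from non-lesion-based domain shift. Hence the paper's argument for a separate OOD filter in front of the detector.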

Cite this Paper


BibTeX
@InProceedings{pmlr-v143-heer21a,
  title     = {The {OOD} Blind Spot of Unsupervised Anomaly Detection},
  author    = {Heer, Matth{\"a}us and Postels, Janis and Chen, Xiaoran and Konukoglu, Ender and Albarqouni, Shadi},
  booktitle = {Proceedings of the Fourth Conference on Medical Imaging with Deep Learning},
  pages     = {286--300},
  year      = {2021},
  editor    = {Heinrich, Mattias and Dou, Qi and de Bruijne, Marleen and Lellmann, Jan and Schl{\"a}fer, Alexander and Ernst, Floris},
  volume    = {143},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v143/heer21a/heer21a.pdf},
  url       = {https://proceedings.mlr.press/v143/heer21a.html}
}
Endnote
%0 Conference Paper
%T The OOD Blind Spot of Unsupervised Anomaly Detection
%A Matthäus Heer
%A Janis Postels
%A Xiaoran Chen
%A Ender Konukoglu
%A Shadi Albarqouni
%B Proceedings of the Fourth Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Mattias Heinrich
%E Qi Dou
%E Marleen de Bruijne
%E Jan Lellmann
%E Alexander Schläfer
%E Floris Ernst
%F pmlr-v143-heer21a
%I PMLR
%P 286--300
%U https://proceedings.mlr.press/v143/heer21a.html
%V 143
APA
Heer, M., Postels, J., Chen, X., Konukoglu, E. & Albarqouni, S. (2021). The OOD Blind Spot of Unsupervised Anomaly Detection. Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 143:286-300. Available from https://proceedings.mlr.press/v143/heer21a.html.