Model-agnostic out-of-distribution detection using combined statistical tests

Federico Bergamin, Pierre-Alexandre Mattei, Jakob Drachmann Havtorn, Hugo Sénétaire, Hugo Schmutz, Lars Maaløe, Soren Hauberg, Jes Frellsen
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:10753-10776, 2022.

Abstract

We present simple methods for out-of-distribution detection using a trained generative model. These techniques, based on classical statistical tests, are model-agnostic in the sense that they can be applied to any differentiable generative model. The idea is to combine a classical parametric test (Rao’s score test) with the recently introduced typicality test. These two test statistics are both theoretically well-founded and exploit different sources of information based on the likelihood for the typicality test and its gradient for the score test. We show that combining them using Fisher’s method overall leads to a more accurate out-of-distribution test. We also discuss the benefits of casting out-of-distribution detection as a statistical testing problem, noting in particular that false positive rate control can be valuable for practical out-of-distribution detection. Despite their simplicity and generality, these methods can be competitive with model-specific out-of-distribution detection algorithms without any assumptions on the out-distribution.
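As an illustration of the recipe described in the abstract, the following is a minimal Python sketch (not the authors' implementation) of combining an empirical typicality p-value and an empirical score-test p-value with Fisher's method. The functions log_lik(x) and score_sq_norm(x) are hypothetical placeholders for the trained generative model's log-likelihood and the squared norm of its parameter gradient, and the calibration arrays are assumed to hold the corresponding statistics computed on held-out in-distribution data.

import numpy as np
from scipy.stats import chi2

def empirical_pvalue(stat, calib_stats):
    # Right-tailed empirical p-value: fraction of calibration statistics at
    # least as extreme as the observed one (with a +1 smoothing correction).
    n = len(calib_stats)
    return (1 + np.sum(calib_stats >= stat)) / (n + 1)

def combined_ood_pvalue(x, log_lik, score_sq_norm, calib_loglik, calib_score):
    # Typicality statistic: distance of the test log-likelihood from the mean
    # log-likelihood of held-out in-distribution data (single-sample version).
    typ_stat = abs(log_lik(x) - calib_loglik.mean())
    typ_calib = np.abs(calib_loglik - calib_loglik.mean())
    p_typ = empirical_pvalue(typ_stat, typ_calib)

    # Score statistic: squared norm of the gradient of log p_theta(x) with
    # respect to the model parameters; large values suggest an atypical input.
    p_score = empirical_pvalue(score_sq_norm(x), calib_score)

    # Fisher's method: -2 * (log p1 + log p2) follows a chi-squared
    # distribution with 4 degrees of freedom under the null hypothesis,
    # assuming the two p-values are independent.
    fisher_stat = -2.0 * (np.log(p_typ) + np.log(p_score))
    return chi2.sf(fisher_stat, df=4)

Because the output is a p-value, thresholding it at a chosen significance level gives the false positive rate control discussed in the abstract; the exact statistics used in the paper differ in their details, so this sketch should be read only as a schematic of the combination step.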

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-bergamin22a,
  title     = {Model-agnostic out-of-distribution detection using combined statistical tests},
  author    = {Bergamin, Federico and Mattei, Pierre-Alexandre and Drachmann Havtorn, Jakob and S\'en\'etaire, Hugo and Schmutz, Hugo and Maal{\o}e, Lars and Hauberg, Soren and Frellsen, Jes},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {10753--10776},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/bergamin22a/bergamin22a.pdf},
  url       = {https://proceedings.mlr.press/v151/bergamin22a.html}
}
Endnote
%0 Conference Paper
%T Model-agnostic out-of-distribution detection using combined statistical tests
%A Federico Bergamin
%A Pierre-Alexandre Mattei
%A Jakob Drachmann Havtorn
%A Hugo Sénétaire
%A Hugo Schmutz
%A Lars Maaløe
%A Soren Hauberg
%A Jes Frellsen
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-bergamin22a
%I PMLR
%P 10753--10776
%U https://proceedings.mlr.press/v151/bergamin22a.html
%V 151
APA
Bergamin, F., Mattei, P., Drachmann Havtorn, J., Sénétaire, H., Schmutz, H., Maaløe, L., Hauberg, S., & Frellsen, J. (2022). Model-agnostic out-of-distribution detection using combined statistical tests. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:10753-10776. Available from https://proceedings.mlr.press/v151/bergamin22a.html.