Out of Distribution Detection via Neural Network Anchoring

Rushil Anirudh, Jayaraman J. Thiagarajan
Proceedings of The 14th Asian Conference on Machine Learning, PMLR 189:32-47, 2023.

Abstract

Our goal in this paper is to exploit heteroscedastic temperature scaling as a calibration strategy for out of distribution (OOD) detection. Heteroscedasticity here refers to the fact that the optimal temperature parameter for each sample can be different, as opposed to conventional approaches that use the same value for the entire distribution. To enable this, we propose a new training strategy called anchoring that can estimate appropriate temperature values for each sample, leading to state-of-the-art OOD detection performance across several benchmarks. Using NTK theory, we show that this temperature function estimate is closely linked to the epistemic uncertainty of the classifier, which explains its behavior. In contrast to some of the best-performing OOD detection approaches, our method does not require exposure to additional outlier datasets, custom calibration objectives, or model ensembling. Through empirical studies with different OOD detection settings (far OOD, near OOD, and semantically coherent OOD), we establish a highly effective OOD detection approach.
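The core idea above (scaling each sample's logits by its own temperature before scoring) can be sketched as follows. This is a minimal illustration, not the paper's method: the per-sample temperatures are assumed to be given, whereas the paper estimates them via anchored training, and the max-softmax score below is a generic stand-in for the paper's detector.

```python
import numpy as np

def ood_scores(logits, temperatures):
    """Heteroscedastic temperature scaling sketch for OOD scoring.

    logits:       (n, k) array of classifier logits.
    temperatures: (n,) array of per-sample temperatures T_i > 0
                  (hypothetical inputs here; the paper derives them
                  from the anchored model).
    Returns an (n,) array of 1 - max softmax probability of the
    scaled logits, so larger values look more OOD-like.
    """
    z = logits / temperatures[:, None]      # each sample scaled by its own T_i
    z = z - z.max(axis=1, keepdims=True)    # numerically stable softmax
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    return 1.0 - p.max(axis=1)              # low max confidence -> likely OOD

# A sharply peaked (confident) sample vs. a flat (uncertain) one:
logits = np.array([[8.0, 0.0, 0.0],
                   [1.0, 0.9, 1.1]])
temps = np.array([1.0, 1.0])
scores = ood_scores(logits, temps)
# scores[1] > scores[0]: the flatter logits receive the higher OOD score
```

Raising a sample's temperature flattens its softmax and therefore raises its score, which is how a larger per-sample temperature translates into flagging that sample as more uncertain.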

Cite this Paper


BibTeX
@InProceedings{pmlr-v189-anirudh23a,
  title     = {Out of Distribution Detection via Neural Network Anchoring},
  author    = {Anirudh, Rushil and Thiagarajan, Jayaraman J.},
  booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
  pages     = {32--47},
  year      = {2023},
  editor    = {Khan, Emtiyaz and Gonen, Mehmet},
  volume    = {189},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v189/anirudh23a/anirudh23a.pdf},
  url       = {https://proceedings.mlr.press/v189/anirudh23a.html},
  abstract  = {Our goal in this paper is to exploit heteroscedastic temperature scaling as a calibration strategy for out of distribution (OOD) detection. Heteroscedasticity here refers to the fact that the optimal temperature parameter for each sample can be different, as opposed to conventional approaches that use the same value for the entire distribution. To enable this, we propose a new training strategy called anchoring that can estimate appropriate temperature values for each sample, leading to state-of-the-art OOD detection performance across several benchmarks. Using NTK theory, we show that this temperature function estimate is closely linked to the epistemic uncertainty of the classifier, which explains its behavior. In contrast to some of the best-performing OOD detection approaches, our method does not require exposure to additional outlier datasets, custom calibration objectives, or model ensembling. Through empirical studies with different OOD detection settings (far OOD, near OOD, and semantically coherent OOD), we establish a highly effective OOD detection approach.}
}
Endnote
%0 Conference Paper
%T Out of Distribution Detection via Neural Network Anchoring
%A Rushil Anirudh
%A Jayaraman J. Thiagarajan
%B Proceedings of The 14th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Emtiyaz Khan
%E Mehmet Gonen
%F pmlr-v189-anirudh23a
%I PMLR
%P 32--47
%U https://proceedings.mlr.press/v189/anirudh23a.html
%V 189
%X Our goal in this paper is to exploit heteroscedastic temperature scaling as a calibration strategy for out of distribution (OOD) detection. Heteroscedasticity here refers to the fact that the optimal temperature parameter for each sample can be different, as opposed to conventional approaches that use the same value for the entire distribution. To enable this, we propose a new training strategy called anchoring that can estimate appropriate temperature values for each sample, leading to state-of-the-art OOD detection performance across several benchmarks. Using NTK theory, we show that this temperature function estimate is closely linked to the epistemic uncertainty of the classifier, which explains its behavior. In contrast to some of the best-performing OOD detection approaches, our method does not require exposure to additional outlier datasets, custom calibration objectives, or model ensembling. Through empirical studies with different OOD detection settings (far OOD, near OOD, and semantically coherent OOD), we establish a highly effective OOD detection approach.
APA
Anirudh, R. & Thiagarajan, J. J. (2023). Out of Distribution Detection via Neural Network Anchoring. Proceedings of The 14th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 189:32-47. Available from https://proceedings.mlr.press/v189/anirudh23a.html.
