Out of Distribution Detection via Neural Network Anchoring
Proceedings of The 14th Asian Conference on Machine
Learning, PMLR 189:32-47, 2023.
Abstract
Our goal in this paper is to exploit heteroscedastic
temperature scaling as a calibration strategy for
out-of-distribution (OOD)
detection. Heteroscedasticity here refers to the
fact that the optimal temperature parameter for each
sample can be different, as opposed to conventional
approaches that use the same value for the entire
distribution. To enable this, we propose a new
training strategy called anchoring that can estimate
appropriate temperature values for each sample,
leading to state-of-the-art OOD detection
performance across several benchmarks. Using neural
tangent kernel (NTK) theory, we show that this temperature function
estimate is closely linked to the epistemic
uncertainty of the classifier, which explains its
behavior. In contrast to some of the best-performing
OOD detection approaches, our method does not
require exposure to additional outlier datasets,
custom calibration objectives, or model
ensembling. Through empirical studies with different
OOD detection settings (far OOD, near OOD, and
semantically coherent OOD), we establish a highly
effective OOD detection approach.
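
To make the two ingredients above concrete, the following is a minimal PyTorch-style sketch of anchored inference with a per-sample temperature: each input x is reparameterized against a random anchor c as the pair [c, x - c], and class probabilities are computed as softmax(z / T(x)) with an input-dependent temperature. The anchored-input construction follows the anchoring idea summarized in the abstract; deriving T(x) from the disagreement of logits across anchors, as well as the function names anchored_forward and heteroscedastic_scores, are illustrative assumptions rather than the paper's exact formulation.

# Sketch of anchored inference with per-sample (heteroscedastic)
# temperature scaling. The temperature rule below (spread across
# anchors) is an illustrative assumption, not the paper's exact form.
import torch
import torch.nn.functional as F

def anchored_forward(model, x, anchors):
    """Run the model on (anchor, residual) pairs for each anchor c.

    The model is assumed to accept inputs with twice the original
    number of channels, i.e. the concatenation [c, x - c].
    Returns logits of shape (num_anchors, batch, num_classes).
    """
    outs = []
    for c in anchors:                       # each c has the same shape as x
        inp = torch.cat([c, x - c], dim=1)  # channel-wise concatenation
        outs.append(model(inp))
    return torch.stack(outs, dim=0)

def heteroscedastic_scores(model, x, anchors):
    logits = anchored_forward(model, x, anchors)
    mean_logits = logits.mean(dim=0)
    # Per-sample temperature from anchor disagreement (illustrative):
    # larger spread -> higher temperature -> softer, less confident softmax.
    t = 1.0 + logits.std(dim=0).mean(dim=-1, keepdim=True)
    probs = F.softmax(mean_logits / t, dim=-1)
    # Maximum softmax probability as the detection score
    # (higher = more likely in-distribution).
    return probs.max(dim=-1).values, t

In this style of training, anchors are drawn at random from the training data for each mini-batch, so the network learns to produce consistent predictions regardless of the anchor choice; at test time, a small fixed set of anchors can be reused for every input, avoiding the cost of a full model ensemble.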