Confidence Scoring Using Whitebox Meta-models with Linear Classifier Probes


Tongfei Chen, Jiri Navratil, Vijay Iyengar, Karthikeyan Shanmugam ;
Proceedings of Machine Learning Research, PMLR 89:1467-1475, 2019.

Abstract

We propose a novel confidence scoring mechanism for deep neural networks based on a two-model paradigm involving a base model and a meta-model. The confidence score is learned by the meta-model observing the base model succeeding or failing at its task. As features to the meta-model, we investigate linear classifier probes inserted between the various layers of the base model. Our experiments demonstrate that this approach outperforms multiple baselines in a filtering task, i.e., the task of rejecting samples with low confidence. Experimental results are presented on the CIFAR-10 and CIFAR-100 datasets, with and without added noise. We discuss the importance of confidence scoring in bridging the gap between experimental and real-world applications.
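The two-model idea in the abstract can be illustrated with a minimal sketch: fit a linear classifier probe on each intermediate layer's activations, then train a meta-model to predict base-model success from the probes' outputs. This is a hypothetical illustration, not the authors' implementation; the layer activations, base-model predictions, and the choice of max class probability as the probe feature are all stand-in assumptions, simulated here with random data.

```python
# Hypothetical sketch of the two-model confidence-scoring paradigm:
# linear probes on intermediate activations feed a meta-model that
# predicts whether the base model succeeds on each sample.
# Activations and base-model predictions are simulated with random data;
# in practice they would come from a trained deep network.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_classes = 500, 10

# Simulated activations from two intermediate layers of a base model.
layer_acts = [rng.normal(size=(n, 64)), rng.normal(size=(n, 32))]
y_true = rng.integers(0, n_classes, size=n)       # task labels
base_pred = rng.integers(0, n_classes, size=n)    # base-model predictions
base_correct = (base_pred == y_true).astype(int)  # meta-model target: success/failure

# Fit one linear classifier probe per layer against the task labels,
# then use each probe's maximum class probability as a meta-feature.
meta_features = []
for acts in layer_acts:
    probe = LogisticRegression(max_iter=1000).fit(acts, y_true)
    meta_features.append(probe.predict_proba(acts).max(axis=1))
X_meta = np.column_stack(meta_features)

# Meta-model: learns to predict base-model success from probe features.
meta_model = LogisticRegression().fit(X_meta, base_correct)
confidence = meta_model.predict_proba(X_meta)[:, 1]  # scores in [0, 1]

# Filtering task: reject samples whose confidence falls below a threshold.
keep = confidence >= np.median(confidence)
print(confidence.shape, int(keep.sum()))
```

In a real setting the probes would be trained on held-out data rather than the same samples, and the meta-features could include all probe class probabilities rather than only the maximum.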
