Confidence Scoring Using Whitebox Meta-models with Linear Classifier Probes

Tongfei Chen, Jiri Navratil, Vijay Iyengar, Karthikeyan Shanmugam
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:1467-1475, 2019.

Abstract

We propose a novel confidence scoring mechanism for deep neural networks based on a two-model paradigm involving a base model and a meta-model. The confidence score is learned by the meta-model observing the base model succeeding/failing at its task. As features to the meta-model, we investigate linear classifier probes inserted between the various layers of the base model. Our experiments demonstrate that this approach outperforms multiple baselines in a filtering task, i.e., the task of rejecting samples with low confidence. Experimental results are presented using the CIFAR-10 and CIFAR-100 datasets with and without added noise. We discuss the importance of confidence scoring to bridge the gap between experimental and real-world applications.
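To make the two-model paradigm concrete, the following is a minimal sketch of the idea described in the abstract, assuming a PyTorch base classifier and a logistic-regression meta-model. All names here (BaseNet, probe_features, etc.), the toy architecture, and the random stand-in data are hypothetical illustrations, not the paper's actual models or training procedure.

```python
# Sketch: linear classifier probes on intermediate layers feed a meta-model
# that predicts whether the base model's prediction is correct.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class BaseNet(nn.Module):
    """Toy base model standing in for a CIFAR-scale network."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        h1 = self.block1(x)                      # intermediate representation 1
        h2 = self.block2(h1)                     # intermediate representation 2
        logits = self.head(h2.flatten(1))
        return logits, [h1.flatten(1), h2.flatten(1)]

def probe_features(base, probes, x):
    """Apply each linear probe to its layer's activations and concatenate the
    resulting class posteriors into the meta-model's feature vector."""
    with torch.no_grad():
        logits, hidden = base(x)
        feats = [torch.softmax(p(h), dim=1) for p, h in zip(probes, hidden)]
        feats.append(torch.softmax(logits, dim=1))  # base model's own posteriors
    return torch.cat(feats, dim=1), logits.argmax(dim=1)

# Assume the base model and per-layer linear probes were already trained.
base = BaseNet().eval()
probes = [nn.Linear(16 * 16 * 16, 10), nn.Linear(32 * 8 * 8, 10)]

# Meta-model training: label each held-out sample by whether the base
# model got it right, then fit a classifier on the probe features.
x_meta = torch.randn(256, 3, 32, 32)             # stand-in for real images
y_meta = torch.randint(0, 10, (256,))            # stand-in for true labels
feats, preds = probe_features(base, probes, x_meta)
correct = (preds == y_meta).long().numpy()       # 1 = base model succeeded
meta = LogisticRegression(max_iter=1000).fit(feats.numpy(), correct)

# Confidence score for new inputs: probability the base model is right.
feats_new, _ = probe_features(base, probes, torch.randn(8, 3, 32, 32))
confidence = meta.predict_proba(feats_new.numpy())[:, 1]
print(confidence)
```

In the filtering task, one would reject any sample whose confidence score falls below a chosen threshold, keeping only predictions the meta-model deems trustworthy.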

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-chen19c,
  title     = {Confidence Scoring Using Whitebox Meta-models with Linear Classifier Probes},
  author    = {Chen, Tongfei and Navratil, Jiri and Iyengar, Vijay and Shanmugam, Karthikeyan},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {1467--1475},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/chen19c/chen19c.pdf},
  url       = {https://proceedings.mlr.press/v89/chen19c.html},
  abstract  = {We propose a novel confidence scoring mechanism for deep neural networks based on a two-model paradigm involving a base model and a meta-model. The confidence score is learned by the meta-model observing the base model succeeding/failing at its task. As features to the meta-model, we investigate linear classifier probes inserted between the various layers of the base model. Our experiments demonstrate that this approach outperforms multiple baselines in a filtering task, i.e., task of rejecting samples with low confidence. Experimental results are presented using CIFAR-10 and CIFAR-100 dataset with and without added noise. We discuss the importance of confidence scoring to bridge the gap between experimental and real-world applications.}
}
Endnote
%0 Conference Paper
%T Confidence Scoring Using Whitebox Meta-models with Linear Classifier Probes
%A Tongfei Chen
%A Jiri Navratil
%A Vijay Iyengar
%A Karthikeyan Shanmugam
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-chen19c
%I PMLR
%P 1467--1475
%U https://proceedings.mlr.press/v89/chen19c.html
%V 89
%X We propose a novel confidence scoring mechanism for deep neural networks based on a two-model paradigm involving a base model and a meta-model. The confidence score is learned by the meta-model observing the base model succeeding/failing at its task. As features to the meta-model, we investigate linear classifier probes inserted between the various layers of the base model. Our experiments demonstrate that this approach outperforms multiple baselines in a filtering task, i.e., task of rejecting samples with low confidence. Experimental results are presented using CIFAR-10 and CIFAR-100 dataset with and without added noise. We discuss the importance of confidence scoring to bridge the gap between experimental and real-world applications.
APA
Chen, T., Navratil, J., Iyengar, V. & Shanmugam, K. (2019). Confidence Scoring Using Whitebox Meta-models with Linear Classifier Probes. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:1467-1475. Available from https://proceedings.mlr.press/v89/chen19c.html.