Test Sample Accuracy Scales with Training Sample Density in Neural Networks

Xu Ji, Razvan Pascanu, R. Devon Hjelm, Balaji Lakshminarayanan, Andrea Vedaldi
Proceedings of The 1st Conference on Lifelong Learning Agents, PMLR 199:629-646, 2022.

Abstract

Intuitively, one would expect accuracy of a trained neural network’s prediction on test samples to correlate with how densely the samples are surrounded by seen training samples in representation space. We find that a bound on empirical training error smoothed across linear activation regions scales inversely with training sample density in representation space. Empirically, we verify this bound is a strong predictor of the inaccuracy of the network’s prediction on test samples. For unseen test sets, including those with out-of-distribution samples, ranking test samples by their local region’s error bound and discarding samples with the highest bounds raises prediction accuracy by up to 20% in absolute terms for image classification datasets, on average over thresholds.
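The selective-prediction protocol the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: the per-sample `error_bound` scores and the correctness labels are simulated placeholders standing in for the paper's region-level bound, and the retention sweep simply averages accuracy over ten thresholds.

```python
import random

random.seed(0)
n = 1000
# Stand-in for each test sample's local-region error bound (hypothetical scores).
error_bound = [random.random() for _ in range(n)]
# Simulate prediction correctness loosely anti-correlated with the bound.
correct = [random.random() > b * 0.8 for b in error_bound]

# Rank test samples by their bound, lowest (most trusted) first.
order = sorted(range(n), key=lambda i: error_bound[i])

# Discard the highest-bound samples at each threshold: keep 10%, 20%, ..., 100%.
accuracies = []
for k in range(1, 11):
    kept = order[: k * n // 10]
    accuracies.append(sum(correct[i] for i in kept) / len(kept))

# Accuracy averaged over retention thresholds, as in the paper's reported metric.
mean_acc = sum(accuracies) / len(accuracies)
```

With a score that genuinely tracks inaccuracy, the retained-sample accuracy at low retention fractions exceeds the accuracy over the full test set, which is the effect the abstract quantifies.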

Cite this Paper


BibTeX
@InProceedings{pmlr-v199-ji22a,
  title     = {Test Sample Accuracy Scales with Training Sample Density in Neural Networks},
  author    = {Ji, Xu and Pascanu, Razvan and Hjelm, R. Devon and Lakshminarayanan, Balaji and Vedaldi, Andrea},
  booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
  pages     = {629--646},
  year      = {2022},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Precup, Doina},
  volume    = {199},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v199/ji22a/ji22a.pdf},
  url       = {https://proceedings.mlr.press/v199/ji22a.html},
  abstract  = {Intuitively, one would expect accuracy of a trained neural network’s prediction on test samples to correlate with how densely the samples are surrounded by seen training samples in representation space. We find that a bound on empirical training error smoothed across linear activation regions scales inversely with training sample density in representation space. Empirically, we verify this bound is a strong predictor of the inaccuracy of the network’s prediction on test samples. For unseen test sets, including those with out-of-distribution samples, ranking test samples by their local region’s error bound and discarding samples with the highest bounds raises prediction accuracy by up to 20% in absolute terms for image classification datasets, on average over thresholds.}
}
Endnote
%0 Conference Paper
%T Test Sample Accuracy Scales with Training Sample Density in Neural Networks
%A Xu Ji
%A Razvan Pascanu
%A R. Devon Hjelm
%A Balaji Lakshminarayanan
%A Andrea Vedaldi
%B Proceedings of The 1st Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2022
%E Sarath Chandar
%E Razvan Pascanu
%E Doina Precup
%F pmlr-v199-ji22a
%I PMLR
%P 629--646
%U https://proceedings.mlr.press/v199/ji22a.html
%V 199
%X Intuitively, one would expect accuracy of a trained neural network’s prediction on test samples to correlate with how densely the samples are surrounded by seen training samples in representation space. We find that a bound on empirical training error smoothed across linear activation regions scales inversely with training sample density in representation space. Empirically, we verify this bound is a strong predictor of the inaccuracy of the network’s prediction on test samples. For unseen test sets, including those with out-of-distribution samples, ranking test samples by their local region’s error bound and discarding samples with the highest bounds raises prediction accuracy by up to 20% in absolute terms for image classification datasets, on average over thresholds.
APA
Ji, X., Pascanu, R., Hjelm, R. D., Lakshminarayanan, B., & Vedaldi, A. (2022). Test Sample Accuracy Scales with Training Sample Density in Neural Networks. Proceedings of The 1st Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 199:629-646. Available from https://proceedings.mlr.press/v199/ji22a.html.