Beyond the Norms: Detecting Prediction Errors in Regression Models

Andres Altieri, Marco Romanelli, Georg Pichler, Florence Alberge, Pablo Piantanida
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:1186-1221, 2024.

Abstract

This paper tackles the challenge of detecting unreliable behavior in regression algorithms, which may arise from intrinsic variability (e.g., aleatoric uncertainty) or modeling errors (e.g., model uncertainty). First, we formally introduce the notion of unreliability in regression, i.e., when the output of the regressor exceeds a specified discrepancy (or error). Then, using powerful tools for probabilistic modeling, we estimate the discrepancy density, and we measure its statistical diversity using our proposed metric for statistical dissimilarity. In turn, this allows us to derive a data-driven score that expresses the uncertainty of the regression outcome. We show empirical improvements in error detection for multiple regression tasks, consistently outperforming popular baseline approaches, and contributing to the broader field of uncertainty quantification and safe machine learning systems.
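To give a rough sense of the pipeline the abstract describes, the following Python sketch is purely illustrative: it substitutes a plain conditional kernel density estimate for the paper's probabilistic model and proposed dissimilarity metric, and the toy data, cubic regressor, and tolerance eps are all invented assumptions, not the authors' setup.

# Hypothetical sketch of the error-detection idea in the abstract: estimate
# the density of the regressor's discrepancy |y - f(x)| and score an input
# by the estimated probability that its error exceeds a tolerance eps.
# NOT the authors' method; a plain KDE stands in for their density model
# and statistical-dissimilarity metric.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def sample(n):
    """Toy regression data with heteroscedastic (aleatoric) noise."""
    x = rng.uniform(-3.0, 3.0, size=n)
    y = np.sin(x) + rng.normal(0.0, 0.05 + 0.25 * np.abs(x), size=n)
    return x, y

# Deliberately misspecified regressor (model uncertainty): cubic fit to a sinusoid.
x_tr, y_tr = sample(2000)
coeffs = np.polyfit(x_tr, y_tr, deg=3)

def predict(x):
    return np.polyval(coeffs, x)

# Estimate the joint density of (input, discrepancy) on held-out data.
x_val, y_val = sample(2000)
disc = np.abs(y_val - predict(x_val))
joint = gaussian_kde(np.vstack([x_val, disc]))

def unreliability(x0, eps=0.3, r_max=3.0, n_grid=400):
    """Estimated P(|y - f(x)| > eps | x = x0): the tail mass of the
    conditional discrepancy density beyond the tolerance eps.
    The grid spacing cancels in the ratio."""
    r = np.linspace(0.0, r_max, n_grid)
    dens = joint(np.vstack([np.full(n_grid, x0), r]))
    return dens[r > eps].sum() / dens.sum()

# Noisier and more poorly fit regions should receive higher scores.
for x0 in (0.0, 2.5):
    print(f"x = {x0:+.1f}: unreliability score = {unreliability(x0):.3f}")

In this toy run, inputs near the edge of the domain (where the noise is larger and the cubic fit is worse) receive higher scores, which is the qualitative behavior the abstract's data-driven score is designed to capture.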

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-altieri24a,
  title     = {Beyond the Norms: Detecting Prediction Errors in Regression Models},
  author    = {Altieri, Andres and Romanelli, Marco and Pichler, Georg and Alberge, Florence and Piantanida, Pablo},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {1186--1221},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/altieri24a/altieri24a.pdf},
  url       = {https://proceedings.mlr.press/v235/altieri24a.html},
  abstract  = {This paper tackles the challenge of detecting unreliable behavior in regression algorithms, which may arise from intrinsic variability (e.g., aleatoric uncertainty) or modeling errors (e.g., model uncertainty). First, we formally introduce the notion of unreliability in regression, i.e., when the output of the regressor exceeds a specified discrepancy (or error). Then, using powerful tools for probabilistic modeling, we estimate the discrepancy density, and we measure its statistical diversity using our proposed metric for statistical dissimilarity. In turn, this allows us to derive a data-driven score that expresses the uncertainty of the regression outcome. We show empirical improvements in error detection for multiple regression tasks, consistently outperforming popular baseline approaches, and contributing to the broader field of uncertainty quantification and safe machine learning systems.}
}
Endnote
%0 Conference Paper
%T Beyond the Norms: Detecting Prediction Errors in Regression Models
%A Andres Altieri
%A Marco Romanelli
%A Georg Pichler
%A Florence Alberge
%A Pablo Piantanida
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-altieri24a
%I PMLR
%P 1186--1221
%U https://proceedings.mlr.press/v235/altieri24a.html
%V 235
%X This paper tackles the challenge of detecting unreliable behavior in regression algorithms, which may arise from intrinsic variability (e.g., aleatoric uncertainty) or modeling errors (e.g., model uncertainty). First, we formally introduce the notion of unreliability in regression, i.e., when the output of the regressor exceeds a specified discrepancy (or error). Then, using powerful tools for probabilistic modeling, we estimate the discrepancy density, and we measure its statistical diversity using our proposed metric for statistical dissimilarity. In turn, this allows us to derive a data-driven score that expresses the uncertainty of the regression outcome. We show empirical improvements in error detection for multiple regression tasks, consistently outperforming popular baseline approaches, and contributing to the broader field of uncertainty quantification and safe machine learning systems.
APA
Altieri, A., Romanelli, M., Pichler, G., Alberge, F. & Piantanida, P. (2024). Beyond the Norms: Detecting Prediction Errors in Regression Models. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:1186-1221. Available from https://proceedings.mlr.press/v235/altieri24a.html.