Learning Prediction Intervals for Regression: Generalization and Calibration

Haoxian Chen, Ziyi Huang, Henry Lam, Huajie Qian, Haofeng Zhang
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:820-828, 2021.

Abstract

We study the generation of prediction intervals in regression for uncertainty quantification. This task can be formalized as an empirical constrained optimization problem that minimizes the average interval width while maintaining the coverage accuracy across data. We strengthen the existing literature by studying two aspects of this empirical optimization. First is a general learning theory to characterize the optimality-feasibility tradeoff that encompasses Lipschitz continuity and VC-subgraph classes, which are exemplified in regression trees and neural networks. Second is a calibration machinery and the corresponding statistical theory to optimally select the regularization parameter that manages this tradeoff, which bypasses the overfitting issues in previous approaches in coverage attainment. We empirically demonstrate the strengths of our interval generation and calibration algorithms in terms of testing performances compared to existing benchmarks.

Cite this Paper
BibTeX
@InProceedings{pmlr-v130-chen21b,
  title     = {Learning Prediction Intervals for Regression: Generalization and Calibration},
  author    = {Chen, Haoxian and Huang, Ziyi and Lam, Henry and Qian, Huajie and Zhang, Haofeng},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {820--828},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/chen21b/chen21b.pdf},
  url       = {https://proceedings.mlr.press/v130/chen21b.html},
  abstract  = {We study the generation of prediction intervals in regression for uncertainty quantification. This task can be formalized as an empirical constrained optimization problem that minimizes the average interval width while maintaining the coverage accuracy across data. We strengthen the existing literature by studying two aspects of this empirical optimization. First is a general learning theory to characterize the optimality-feasibility tradeoff that encompasses Lipschitz continuity and VC-subgraph classes, which are exemplified in regression trees and neural networks. Second is a calibration machinery and the corresponding statistical theory to optimally select the regularization parameter that manages this tradeoff, which bypasses the overfitting issues in previous approaches in coverage attainment. We empirically demonstrate the strengths of our interval generation and calibration algorithms in terms of testing performances compared to existing benchmarks.}
}
Endnote
%0 Conference Paper
%T Learning Prediction Intervals for Regression: Generalization and Calibration
%A Haoxian Chen
%A Ziyi Huang
%A Henry Lam
%A Huajie Qian
%A Haofeng Zhang
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-chen21b
%I PMLR
%P 820--828
%U https://proceedings.mlr.press/v130/chen21b.html
%V 130
%X We study the generation of prediction intervals in regression for uncertainty quantification. This task can be formalized as an empirical constrained optimization problem that minimizes the average interval width while maintaining the coverage accuracy across data. We strengthen the existing literature by studying two aspects of this empirical optimization. First is a general learning theory to characterize the optimality-feasibility tradeoff that encompasses Lipschitz continuity and VC-subgraph classes, which are exemplified in regression trees and neural networks. Second is a calibration machinery and the corresponding statistical theory to optimally select the regularization parameter that manages this tradeoff, which bypasses the overfitting issues in previous approaches in coverage attainment. We empirically demonstrate the strengths of our interval generation and calibration algorithms in terms of testing performances compared to existing benchmarks.
APA
Chen, H., Huang, Z., Lam, H., Qian, H. & Zhang, H. (2021). Learning Prediction Intervals for Regression: Generalization and Calibration. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:820-828. Available from https://proceedings.mlr.press/v130/chen21b.html.