Risk Bounds on Aleatoric Uncertainty Recovery

Yikai Zhang, Jiahe Lin, Fengpei Li, Yeshaya Adler, Kashif Rasul, Anderson Schneider, Yuriy Nevmyvaka
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:6015-6036, 2023.

Abstract

Quantifying aleatoric uncertainty is a challenging task in machine learning. It is important for decision making associated with data-dependent uncertainty in model outcomes. Recently, many empirical studies in modeling aleatoric uncertainty under regression settings have relied primarily on either a Gaussian likelihood or moment matching. However, the performance of these methods varies across datasets, while discussions of their theoretical guarantees are lacking. In this work, we investigate theoretical aspects of these approaches and establish risk bounds for their estimates. We provide conditions that are sufficient to guarantee the PAC-learnability of the aleatoric uncertainty. The study suggests that the likelihood- and moment matching-based methods enjoy different types of guarantees in their risk bounds, i.e., they calibrate different aspects of the uncertainty and thus exhibit distinct properties in different regimes of the parameter space. Finally, we conduct an empirical study which shows promising results and supports our theorems.

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-zhang23h,
  title     = {Risk Bounds on Aleatoric Uncertainty Recovery},
  author    = {Zhang, Yikai and Lin, Jiahe and Li, Fengpei and Adler, Yeshaya and Rasul, Kashif and Schneider, Anderson and Nevmyvaka, Yuriy},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {6015--6036},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/zhang23h/zhang23h.pdf},
  url       = {https://proceedings.mlr.press/v206/zhang23h.html},
  abstract  = {Quantifying aleatoric uncertainty is a challenging task in machine learning. It is important for decision making associated with data-dependent uncertainty in model outcomes. Recently, many empirical studies in modeling aleatoric uncertainty under regression settings primarily rely on either a Gaussian likelihood or moment matching. However, the performance of these methods varies for different datasets whereas discussions on their theoretical guarantees are lacking. In this work, we investigate theoretical aspects of these approaches and establish risk bounds for their estimates. We provide conditions that are sufficient to guarantee the PAC-learnability of the aleatoric uncertainty. The study suggests that the likelihood and moment matching-based methods enjoy different types of guarantee in their risk bounds, i.e., they calibrate different aspects of the uncertainty and thus exhibit distinct properties in different regimes of the parameter space. Finally, we conduct empirical study which shows promising results and supports our theorems.}
}
Endnote
%0 Conference Paper
%T Risk Bounds on Aleatoric Uncertainty Recovery
%A Yikai Zhang
%A Jiahe Lin
%A Fengpei Li
%A Yeshaya Adler
%A Kashif Rasul
%A Anderson Schneider
%A Yuriy Nevmyvaka
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-zhang23h
%I PMLR
%P 6015--6036
%U https://proceedings.mlr.press/v206/zhang23h.html
%V 206
%X Quantifying aleatoric uncertainty is a challenging task in machine learning. It is important for decision making associated with data-dependent uncertainty in model outcomes. Recently, many empirical studies in modeling aleatoric uncertainty under regression settings primarily rely on either a Gaussian likelihood or moment matching. However, the performance of these methods varies for different datasets whereas discussions on their theoretical guarantees are lacking. In this work, we investigate theoretical aspects of these approaches and establish risk bounds for their estimates. We provide conditions that are sufficient to guarantee the PAC-learnability of the aleatoric uncertainty. The study suggests that the likelihood and moment matching-based methods enjoy different types of guarantee in their risk bounds, i.e., they calibrate different aspects of the uncertainty and thus exhibit distinct properties in different regimes of the parameter space. Finally, we conduct empirical study which shows promising results and supports our theorems.
APA
Zhang, Y., Lin, J., Li, F., Adler, Y., Rasul, K., Schneider, A. & Nevmyvaka, Y. (2023). Risk Bounds on Aleatoric Uncertainty Recovery. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:6015-6036. Available from https://proceedings.mlr.press/v206/zhang23h.html.