Risk and Regret of Hierarchical Bayesian Learners

Jonathan Huggins, Josh Tenenbaum
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1442-1451, 2015.

Abstract

Common statistical practice has shown that the full power of Bayesian methods is not realized until hierarchical priors are used, as these allow for greater “robustness” and the ability to “share statistical strength.” Yet it is an ongoing challenge to provide a learning-theoretically sound formalism of such notions that: offers practical guidance concerning when and how best to utilize hierarchical models; provides insights into what makes for a good hierarchical prior; and, when the form of the prior has been chosen, can guide the choice of hyperparameter settings. We present a set of analytical tools for understanding hierarchical priors in both the online and batch learning settings. We provide regret bounds under log-loss, which show how certain hierarchical models compare, in retrospect, to the best single model in the model class. We also show how to convert a Bayesian log-loss regret bound into a Bayesian risk bound for any bounded loss, a result which may be of independent interest. Risk and regret bounds for Student’s t and hierarchical Gaussian priors allow us to formalize the concepts of “robustness” and “sharing statistical strength.” Priors for feature selection are investigated as well. Our results suggest that the learning-theoretic benefits of using hierarchical priors can often come at little cost on practical problems.
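The log-loss regret referenced above admits a standard formalization; the notation below is illustrative and may differ from the paper's own. Writing p_theta for the predictive density of a model theta in a class Theta, and p_Bayes for the predictive density of the Bayesian mixture under a (possibly hierarchical) prior pi, the cumulative regret after n rounds is

  % Cumulative log-loss regret of the Bayesian mixture against the best
  % single model chosen in hindsight (standard definition; notation assumed).
  \mathrm{Regret}_n \;=\; \sum_{t=1}^{n} -\log p_{\mathrm{Bayes}}(x_t \mid x_{1:t-1})
  \;-\; \inf_{\theta \in \Theta} \sum_{t=1}^{n} -\log p_{\theta}(x_t \mid x_{1:t-1}),
  \qquad
  p_{\mathrm{Bayes}}(x_t \mid x_{1:t-1}) \;=\; \int p_{\theta}(x_t \mid x_{1:t-1}) \, \pi(\mathrm{d}\theta \mid x_{1:t-1}).

The regret bounds in the paper control this quantity for hierarchical priors, and the abstract's regret-to-risk conversion turns a bound on it into a Bayesian risk bound for any bounded loss.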

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-hugginsb15,
  title     = {Risk and Regret of Hierarchical Bayesian Learners},
  author    = {Huggins, Jonathan and Tenenbaum, Josh},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1442--1451},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/hugginsb15.pdf},
  url       = {https://proceedings.mlr.press/v37/hugginsb15.html}
}
Endnote
%0 Conference Paper
%T Risk and Regret of Hierarchical Bayesian Learners
%A Jonathan Huggins
%A Josh Tenenbaum
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-hugginsb15
%I PMLR
%P 1442--1451
%U https://proceedings.mlr.press/v37/hugginsb15.html
%V 37
RIS
TY - CPAPER
TI - Risk and Regret of Hierarchical Bayesian Learners
AU - Jonathan Huggins
AU - Josh Tenenbaum
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-hugginsb15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 1442
EP - 1451
L1 - http://proceedings.mlr.press/v37/hugginsb15.pdf
UR - https://proceedings.mlr.press/v37/hugginsb15.html
ER -
APA
Huggins, J. & Tenenbaum, J. (2015). Risk and Regret of Hierarchical Bayesian Learners. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:1442-1451. Available from https://proceedings.mlr.press/v37/hugginsb15.html.