Approximate inference for the loss-calibrated Bayesian

Simon Lacoste-Julien, Ferenc Huszár, Zoubin Ghahramani
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:416-424, 2011.

Abstract

We consider the problem of approximate inference in the context of Bayesian decision theory. Traditional approaches focus on approximating general properties of the posterior, ignoring the decision task – and associated losses – for which the posterior could be used. We argue that this can be suboptimal and propose instead to loss-calibrate the approximate inference methods with respect to the decision task at hand. We present a general framework rooted in Bayesian decision theory to analyze approximate inference from the perspective of losses, opening up several research directions. As a first loss-calibrated approximate inference attempt, we propose an EM-like algorithm on the Bayesian posterior risk and show how it can improve a standard approach to Gaussian process classification when losses are asymmetric.
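To make the abstract's central quantity concrete: in Bayesian decision theory, the posterior risk of a decision is the posterior-expected loss, and the Bayes-optimal decision minimizes it. A minimal sketch, using generic symbols ($h$ for the decision, $\theta$ for the latent parameter, $\ell$ for the loss, $\mathcal{D}$ for the data) rather than the paper's own notation:

\[
\mathcal{R}_{p}(h) \;=\; \mathbb{E}_{p(\theta \mid \mathcal{D})}\big[\ell(\theta, h)\big]
\;=\; \int \ell(\theta, h)\, p(\theta \mid \mathcal{D})\,\mathrm{d}\theta,
\qquad
h^{*} \;=\; \operatorname*{arg\,min}_{h}\, \mathcal{R}_{p}(h).
\]

In these terms, loss-calibrating an approximate inference method amounts to choosing the approximation $q \approx p(\theta \mid \mathcal{D})$ so that the decision it induces, $h_q = \arg\min_h \mathbb{E}_{q}[\ell(\theta, h)]$, keeps the true posterior risk $\mathcal{R}_{p}(h_q)$ small, rather than making $q$ close to $p$ under a generic divergence that ignores the loss.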

Cite this Paper


BibTeX
@InProceedings{pmlr-v15-lacoste_julien11a,
  title     = {Approximate inference for the loss-calibrated Bayesian},
  author    = {Lacoste-Julien, Simon and Huszár, Ferenc and Ghahramani, Zoubin},
  booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {416--424},
  year      = {2011},
  editor    = {Gordon, Geoffrey and Dunson, David and Dudík, Miroslav},
  volume    = {15},
  series    = {Proceedings of Machine Learning Research},
  address   = {Fort Lauderdale, FL, USA},
  month     = {11--13 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v15/lacoste_julien11a/lacoste_julien11a.pdf},
  url       = {https://proceedings.mlr.press/v15/lacoste_julien11a.html},
  abstract  = {We consider the problem of approximate inference in the context of Bayesian decision theory. Traditional approaches focus on approximating general properties of the posterior, ignoring the decision task – and associated losses – for which the posterior could be used. We argue that this can be suboptimal and propose instead to loss-calibrate the approximate inference methods with respect to the decision task at hand. We present a general framework rooted in Bayesian decision theory to analyze approximate inference from the perspective of losses, opening up several research directions. As a first loss-calibrated approximate inference attempt, we propose an EM-like algorithm on the Bayesian posterior risk and show how it can improve a standard approach to Gaussian process classification when losses are asymmetric.}
}
Endnote
%0 Conference Paper
%T Approximate inference for the loss-calibrated Bayesian
%A Simon Lacoste-Julien
%A Ferenc Huszár
%A Zoubin Ghahramani
%B Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2011
%E Geoffrey Gordon
%E David Dunson
%E Miroslav Dudík
%F pmlr-v15-lacoste_julien11a
%I PMLR
%P 416--424
%U https://proceedings.mlr.press/v15/lacoste_julien11a.html
%V 15
%X We consider the problem of approximate inference in the context of Bayesian decision theory. Traditional approaches focus on approximating general properties of the posterior, ignoring the decision task – and associated losses – for which the posterior could be used. We argue that this can be suboptimal and propose instead to loss-calibrate the approximate inference methods with respect to the decision task at hand. We present a general framework rooted in Bayesian decision theory to analyze approximate inference from the perspective of losses, opening up several research directions. As a first loss-calibrated approximate inference attempt, we propose an EM-like algorithm on the Bayesian posterior risk and show how it can improve a standard approach to Gaussian process classification when losses are asymmetric.
RIS
TY  - CPAPER
TI  - Approximate inference for the loss-calibrated Bayesian
AU  - Simon Lacoste-Julien
AU  - Ferenc Huszár
AU  - Zoubin Ghahramani
BT  - Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics
DA  - 2011/06/14
ED  - Geoffrey Gordon
ED  - David Dunson
ED  - Miroslav Dudík
ID  - pmlr-v15-lacoste_julien11a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 15
SP  - 416
EP  - 424
L1  - http://proceedings.mlr.press/v15/lacoste_julien11a/lacoste_julien11a.pdf
UR  - https://proceedings.mlr.press/v15/lacoste_julien11a.html
AB  - We consider the problem of approximate inference in the context of Bayesian decision theory. Traditional approaches focus on approximating general properties of the posterior, ignoring the decision task – and associated losses – for which the posterior could be used. We argue that this can be suboptimal and propose instead to loss-calibrate the approximate inference methods with respect to the decision task at hand. We present a general framework rooted in Bayesian decision theory to analyze approximate inference from the perspective of losses, opening up several research directions. As a first loss-calibrated approximate inference attempt, we propose an EM-like algorithm on the Bayesian posterior risk and show how it can improve a standard approach to Gaussian process classification when losses are asymmetric.
ER  -
APA
Lacoste-Julien, S., Huszár, F. & Ghahramani, Z. (2011). Approximate inference for the loss-calibrated Bayesian. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 15:416-424. Available from https://proceedings.mlr.press/v15/lacoste_julien11a.html.
