Learning with Maximum A-Posteriori Perturbation Models

Andreea Gane, Tamir Hazan, Tommi Jaakkola
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:247-256, 2014.

Abstract

Perturbation models are families of distributions induced by random perturbations: they combine randomization of the parameters with maximization to draw unbiased samples. Unlike Gibbs distributions, a perturbation model defined on the basis of low-order statistics still gives rise to high-order dependencies. In this paper, we analyze, extend, and seek to estimate such dependencies from data. In particular, we shift the modelling focus from the parameters of the Gibbs distribution used as a base model to the space of perturbations. We estimate dependent perturbations over the parameters using a hard-EM approach, cast in the form of inverse convex programs. Each inverse program confines the randomization to the parameter polytope responsible for generating the observed answer. We illustrate the method on several computer vision problems.
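For readers unfamiliar with the perturb-and-max idea underlying the abstract, the sketch below (not taken from the paper) illustrates the principle in its exact form, the Gumbel-max trick: adding i.i.d. Gumbel noise to the potential of every configuration and taking the argmax yields an exact sample from the corresponding Gibbs distribution. The paper's perturbation models instead perturb low-dimensional parameters (e.g., per-variable potentials) and recover the sample via MAP inference, which is what induces the high-order dependencies mentioned above. This is a minimal illustration, assuming a small enumerable state space.

import numpy as np

def gumbel_max_sample(theta, rng):
    """Draw one exact sample from the Gibbs distribution p(x) ∝ exp(theta[x])
    by perturbing each configuration's potential with i.i.d. Gumbel(0, 1)
    noise and returning the maximizing configuration (the Gumbel-max trick)."""
    gumbel = rng.gumbel(loc=0.0, scale=1.0, size=theta.shape)
    return np.argmax(theta + gumbel)

# Empirical check: sample frequencies should match the Gibbs distribution.
rng = np.random.default_rng(0)
theta = np.array([1.0, 0.5, -0.3, 2.0])           # toy potentials over 4 states
counts = np.bincount(
    [gumbel_max_sample(theta, rng) for _ in range(50_000)],
    minlength=theta.size,
)
gibbs = np.exp(theta) / np.exp(theta).sum()
print(np.round(counts / counts.sum(), 3))          # ≈ np.round(gibbs, 3)

With one Gumbel variable per full configuration, as here, the construction reproduces the Gibbs distribution exactly but is intractable for structured models. With the lower-dimensional, possibly dependent perturbations studied in the paper, the argmax becomes a MAP problem and the induced distribution is no longer the Gibbs distribution; that induced family is precisely what the authors model and estimate.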

Cite this Paper


BibTeX
@InProceedings{pmlr-v33-gane14,
  title     = {{Learning with Maximum A-Posteriori Perturbation Models}},
  author    = {Gane, Andreea and Hazan, Tamir and Jaakkola, Tommi},
  booktitle = {Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics},
  pages     = {247--256},
  year      = {2014},
  editor    = {Kaski, Samuel and Corander, Jukka},
  volume    = {33},
  series    = {Proceedings of Machine Learning Research},
  address   = {Reykjavik, Iceland},
  month     = {22--25 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v33/gane14.pdf},
  url       = {https://proceedings.mlr.press/v33/gane14.html}
}
APA
Gane, A., Hazan, T., & Jaakkola, T. (2014). Learning with Maximum A-Posteriori Perturbation Models. Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 33:247-256. Available from https://proceedings.mlr.press/v33/gane14.html.