Parameter Learning and Convergent Inference for Dense Random Fields

Philipp Kraehenbuehl, Vladlen Koltun
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):513-521, 2013.

Abstract

Dense random fields are models in which all pairs of variables are directly connected by pairwise potentials. It has recently been shown that mean field inference in dense random fields can be performed efficiently and that these models enable significant accuracy gains in computer vision applications. However, parameter estimation for dense random fields is still poorly understood. In this paper, we present an efficient algorithm for learning parameters in dense random fields. All parameters are estimated jointly, thus capturing dependencies between them. We show that gradients of a variety of loss functions over the mean field marginals can be computed efficiently. The resulting algorithm learns parameters that directly optimize the performance of mean field inference in the model. As a supporting result, we present an efficient inference algorithm for dense random fields that is guaranteed to converge.

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-kraehenbuehl13,
  title     = {Parameter Learning and Convergent Inference for Dense Random Fields},
  author    = {Kraehenbuehl, Philipp and Koltun, Vladlen},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {513--521},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/kraehenbuehl13.pdf},
  url       = {https://proceedings.mlr.press/v28/kraehenbuehl13.html},
  abstract  = {Dense random fields are models in which all pairs of variables are directly connected by pairwise potentials. It has recently been shown that mean field inference in dense random fields can be performed efficiently and that these models enable significant accuracy gains in computer vision applications. However, parameter estimation for dense random fields is still poorly understood. In this paper, we present an efficient algorithm for learning parameters in dense random fields. All parameters are estimated jointly, thus capturing dependencies between them. We show that gradients of a variety of loss functions over the mean field marginals can be computed efficiently. The resulting algorithm learns parameters that directly optimize the performance of mean field inference in the model. As a supporting result, we present an efficient inference algorithm for dense random fields that is guaranteed to converge.}
}
Endnote
%0 Conference Paper
%T Parameter Learning and Convergent Inference for Dense Random Fields
%A Philipp Kraehenbuehl
%A Vladlen Koltun
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-kraehenbuehl13
%I PMLR
%P 513--521
%U https://proceedings.mlr.press/v28/kraehenbuehl13.html
%V 28
%N 3
%X Dense random fields are models in which all pairs of variables are directly connected by pairwise potentials. It has recently been shown that mean field inference in dense random fields can be performed efficiently and that these models enable significant accuracy gains in computer vision applications. However, parameter estimation for dense random fields is still poorly understood. In this paper, we present an efficient algorithm for learning parameters in dense random fields. All parameters are estimated jointly, thus capturing dependencies between them. We show that gradients of a variety of loss functions over the mean field marginals can be computed efficiently. The resulting algorithm learns parameters that directly optimize the performance of mean field inference in the model. As a supporting result, we present an efficient inference algorithm for dense random fields that is guaranteed to converge.
RIS
TY  - CPAPER
TI  - Parameter Learning and Convergent Inference for Dense Random Fields
AU  - Philipp Kraehenbuehl
AU  - Vladlen Koltun
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/05/26
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-kraehenbuehl13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 3
SP  - 513
EP  - 521
L1  - http://proceedings.mlr.press/v28/kraehenbuehl13.pdf
UR  - https://proceedings.mlr.press/v28/kraehenbuehl13.html
AB  - Dense random fields are models in which all pairs of variables are directly connected by pairwise potentials. It has recently been shown that mean field inference in dense random fields can be performed efficiently and that these models enable significant accuracy gains in computer vision applications. However, parameter estimation for dense random fields is still poorly understood. In this paper, we present an efficient algorithm for learning parameters in dense random fields. All parameters are estimated jointly, thus capturing dependencies between them. We show that gradients of a variety of loss functions over the mean field marginals can be computed efficiently. The resulting algorithm learns parameters that directly optimize the performance of mean field inference in the model. As a supporting result, we present an efficient inference algorithm for dense random fields that is guaranteed to converge.
ER  -
APA
Kraehenbuehl, P. & Koltun, V. (2013). Parameter Learning and Convergent Inference for Dense Random Fields. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):513-521. Available from https://proceedings.mlr.press/v28/kraehenbuehl13.html.