Learning to Solve the Constrained Most Probable Explanation Task in Probabilistic Graphical Models

Shivvrat Arya, Tahrima Rahman, Vibhav Gogate
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2791-2799, 2024.

Abstract

We propose a self-supervised learning approach for solving the following constrained optimization task in log-linear models or Markov networks. Let $f$ and $g$ be two log-linear models defined over the sets $X$ and $Y$ of random variables. Given an assignment $x$ to all variables in $X$ (evidence or observations) and a real number $q$, the constrained most-probable explanation (CMPE) task seeks to find an assignment $y$ to all variables in $Y$ such that $f(x, y)$ is maximized and $g(x, y) \leq q$. In our proposed self-supervised approach, given assignments $x$ to $X$ (data), we train a deep neural network that learns to output near-optimal solutions to the CMPE problem without requiring access to any pre-computed solutions. The key idea in our approach is to use first principles and approximate inference methods for CMPE to derive novel loss functions that seek to push infeasible solutions towards feasible ones and feasible solutions towards optimal ones. We analyze the properties of our proposed method and experimentally demonstrate its efficacy on several benchmark problems.
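To make the task definition concrete, here is a minimal brute-force sketch of CMPE over a handful of binary variables. The log-linear scores `f` and `g`, the weights, and the threshold `q` below are hypothetical toy choices, not the paper's models; the sketch only enumerates assignments to $Y$, discards those violating $g(x, y) \leq q$, and keeps the feasible maximizer of $f$ (the paper's contribution is a neural network that avoids exactly this exponential enumeration).

```python
import itertools

def cmpe_brute_force(f, g, x, n_y, q):
    """Enumerate all binary assignments y and return the one that maximizes
    f(x, y) subject to g(x, y) <= q, or (None, -inf) if none is feasible."""
    best_y, best_val = None, float("-inf")
    for y in itertools.product([0, 1], repeat=n_y):
        if g(x, y) <= q and f(x, y) > best_val:
            best_y, best_val = y, f(x, y)
    return best_y, best_val

# Toy log-linear scores over x = (x1,) and y = (y1, y2); weights are made up.
f = lambda x, y: 1.5 * x[0] * y[0] + 2.0 * y[1] - 0.5 * y[0] * y[1]
g = lambda x, y: 1.0 * y[0] + 1.0 * y[1]

y_star, val = cmpe_brute_force(f, g, x=(1,), n_y=2, q=1.0)
# y = (1, 1) is infeasible (g = 2 > q); among the rest, y = (0, 1) maximizes f.
```

This exact search is only viable for tiny $|Y|$; CMPE is NP-hard in general, which motivates the learned approach.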

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-arya24b,
  title     = {Learning to Solve the Constrained Most Probable Explanation Task in Probabilistic Graphical Models},
  author    = {Arya, Shivvrat and Rahman, Tahrima and Gogate, Vibhav},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {2791--2799},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/arya24b/arya24b.pdf},
  url       = {https://proceedings.mlr.press/v238/arya24b.html},
  abstract  = {We propose a self-supervised learning approach for solving the following constrained optimization task in log-linear models or Markov networks. Let $f$ and $g$ be two log-linear models defined over the sets $X$ and $Y$ of random variables. Given an assignment $x$ to all variables in $X$ (evidence or observations) and a real number $q$, the constrained most-probable explanation (CMPE) task seeks to find an assignment $y$ to all variables in $Y$ such that $f(x, y)$ is maximized and $g(x, y) \leq q$. In our proposed self-supervised approach, given assignments $x$ to $X$ (data), we train a deep neural network that learns to output near-optimal solutions to the CMPE problem without requiring access to any pre-computed solutions. The key idea in our approach is to use first principles and approximate inference methods for CMPE to derive novel loss functions that seek to push infeasible solutions towards feasible ones and feasible solutions towards optimal ones. We analyze the properties of our proposed method and experimentally demonstrate its efficacy on several benchmark problems.}
}
Endnote
%0 Conference Paper
%T Learning to Solve the Constrained Most Probable Explanation Task in Probabilistic Graphical Models
%A Shivvrat Arya
%A Tahrima Rahman
%A Vibhav Gogate
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-arya24b
%I PMLR
%P 2791--2799
%U https://proceedings.mlr.press/v238/arya24b.html
%V 238
%X We propose a self-supervised learning approach for solving the following constrained optimization task in log-linear models or Markov networks. Let $f$ and $g$ be two log-linear models defined over the sets $X$ and $Y$ of random variables. Given an assignment $x$ to all variables in $X$ (evidence or observations) and a real number $q$, the constrained most-probable explanation (CMPE) task seeks to find an assignment $y$ to all variables in $Y$ such that $f(x, y)$ is maximized and $g(x, y) \leq q$. In our proposed self-supervised approach, given assignments $x$ to $X$ (data), we train a deep neural network that learns to output near-optimal solutions to the CMPE problem without requiring access to any pre-computed solutions. The key idea in our approach is to use first principles and approximate inference methods for CMPE to derive novel loss functions that seek to push infeasible solutions towards feasible ones and feasible solutions towards optimal ones. We analyze the properties of our proposed method and experimentally demonstrate its efficacy on several benchmark problems.
APA
Arya, S., Rahman, T. & Gogate, V. (2024). Learning to Solve the Constrained Most Probable Explanation Task in Probabilistic Graphical Models. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2791-2799. Available from https://proceedings.mlr.press/v238/arya24b.html.