Learning Distributionally Robust Tractable Probabilistic Models in Continuous Domains

Hailiang Dong, James Amato, Vibhav Gogate, Nicholas Ruozzi
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, PMLR 244:1176-1188, 2024.

Abstract

Tractable probabilistic models (TPMs) have attracted substantial research interest in recent years, particularly because of their ability to answer various reasoning queries in polynomial time. In this study, we focus on continuous TPMs and address the challenge of distribution shift at test time by tackling the adversarial risk minimization problem within the framework of distributionally robust learning. Specifically, we demonstrate that the adversarial risk minimization problem can be solved efficiently when the model permits exact log-likelihood evaluation and efficient learning on weighted data. Our experimental results on several real-world datasets show that our approach achieves significantly higher log-likelihoods on adversarial test sets. Remarkably, the model learned via distributionally robust learning can, at times, achieve a higher average log-likelihood on the original, uncorrupted test set.
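
To make the claimed recipe concrete: when a model supports exact log-likelihood evaluation and learning from weighted data, distributionally robust training can be approximated by alternating between an adversary that re-weights the training points and a learner that refits the model on the re-weighted data. The sketch below is illustrative only and is not the paper's algorithm; it assumes a KL-ball ambiguity set of radius rho around the uniform weights, a single multivariate Gaussian as a stand-in tractable model, and hypothetical function names (adversarial_weights, weighted_gaussian_mle, dro_fit). The paper's actual ambiguity set, model class, and optimization scheme may differ.

# Hedged sketch (not the paper's method): distributionally robust weighted MLE
# for a toy tractable model (a single multivariate Gaussian), alternating between
#   (1) an adversary that re-weights training points within a KL-ball around
#       uniform weights so as to minimize the weighted log-likelihood, and
#   (2) a learner that refits the model by weighted maximum likelihood.
# The KL-ball ambiguity set, the radius rho, and the Gaussian model are
# assumptions made purely for illustration.
import numpy as np
from scipy.stats import multivariate_normal


def adversarial_weights(logp, rho, tol=1e-8):
    """Weights w (summing to 1) minimizing sum_i w_i * logp_i s.t. KL(w || uniform) <= rho.
    The minimizer tilts mass toward low-likelihood points, w_i proportional to
    exp(-logp_i / tau); the temperature tau is found by bisection so the KL
    constraint is (approximately) active."""
    n = len(logp)

    def weights(tau):
        z = -logp / tau
        z -= z.max()                       # stabilize the softmax
        w = np.exp(z)
        return w / w.sum()

    def kl_to_uniform(w):
        return float(np.sum(w * np.log(np.clip(w, 1e-300, None) * n)))

    lo, hi = 1e-3, 1e6                     # small tau -> large KL, large tau -> ~uniform
    if kl_to_uniform(weights(lo)) <= rho:  # even the most adversarial tilt stays in the ball
        return weights(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kl_to_uniform(weights(mid)) > rho:
            lo = mid                       # tilt too strong, increase tau
        else:
            hi = mid
    return weights(hi)


def weighted_gaussian_mle(X, w):
    """Closed-form weighted MLE for a Gaussian (stand-in for any model that
    supports efficient learning from weighted data)."""
    mu = w @ X
    diff = X - mu
    cov = (diff * w[:, None]).T @ diff + 1e-6 * np.eye(X.shape[1])
    return mu, cov


def dro_fit(X, rho=0.1, iters=20):
    """Alternate adversarial re-weighting with weighted refitting (a common
    heuristic for the max-min objective; convergence is not guaranteed)."""
    n = X.shape[0]
    w = np.full(n, 1.0 / n)
    mu, cov = weighted_gaussian_mle(X, w)
    for _ in range(iters):
        logp = multivariate_normal(mu, cov).logpdf(X)  # exact log-likelihoods
        w = adversarial_weights(logp, rho)
        mu, cov = weighted_gaussian_mle(X, w)
    return mu, cov


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    mu, cov = dro_fit(X, rho=0.1)
    print("robust mean:", np.round(mu, 3))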

Cite this Paper


BibTeX
@InProceedings{pmlr-v244-dong24b,
  title     = {Learning Distributionally Robust Tractable Probabilistic Models in Continuous Domains},
  author    = {Dong, Hailiang and Amato, James and Gogate, Vibhav and Ruozzi, Nicholas},
  booktitle = {Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence},
  pages     = {1176--1188},
  year      = {2024},
  editor    = {Kiyavash, Negar and Mooij, Joris M.},
  volume    = {244},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v244/main/assets/dong24b/dong24b.pdf},
  url       = {https://proceedings.mlr.press/v244/dong24b.html},
  abstract  = {Tractable probabilistic models (TPMs) have attracted substantial research interest in recent years, particularly because of their ability to answer various reasoning queries in polynomial time. In this study, we focus on the distributionally robust learning of continuous TPMs and address the challenge of distribution shift at test time by tackling the adversarial risk minimization problem within the framework of distributionally robust learning. Specifically, we demonstrate that the adversarial risk minimization problem can be efficiently addressed when the model permits exact log-likelihood evaluation and efficient learning on weighted data. Our experimental results on several real-world datasets show that our approach achieves significantly higher log-likelihoods on adversarial test sets. Remarkably, we note that the model learned via distributionally robust learning can achieve higher average log-likelihood on the initial uncorrupted test set at times.}
}
Endnote
%0 Conference Paper
%T Learning Distributionally Robust Tractable Probabilistic Models in Continuous Domains
%A Hailiang Dong
%A James Amato
%A Vibhav Gogate
%A Nicholas Ruozzi
%B Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2024
%E Negar Kiyavash
%E Joris M. Mooij
%F pmlr-v244-dong24b
%I PMLR
%P 1176--1188
%U https://proceedings.mlr.press/v244/dong24b.html
%V 244
%X Tractable probabilistic models (TPMs) have attracted substantial research interest in recent years, particularly because of their ability to answer various reasoning queries in polynomial time. In this study, we focus on the distributionally robust learning of continuous TPMs and address the challenge of distribution shift at test time by tackling the adversarial risk minimization problem within the framework of distributionally robust learning. Specifically, we demonstrate that the adversarial risk minimization problem can be efficiently addressed when the model permits exact log-likelihood evaluation and efficient learning on weighted data. Our experimental results on several real-world datasets show that our approach achieves significantly higher log-likelihoods on adversarial test sets. Remarkably, we note that the model learned via distributionally robust learning can achieve higher average log-likelihood on the initial uncorrupted test set at times.
APA
Dong, H., Amato, J., Gogate, V., & Ruozzi, N. (2024). Learning Distributionally Robust Tractable Probabilistic Models in Continuous Domains. Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 244:1176-1188. Available from https://proceedings.mlr.press/v244/dong24b.html.
