Knowledge Intensive Learning of Cutset Networks

Saurabh Mathur, Vibhav Gogate, Sriraam Natarajan
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:1380-1389, 2023.

Abstract

Cutset networks (CNs) are interpretable probabilistic representations that combine probability trees and tree Bayesian networks to model and reason about large multi-dimensional probability distributions. Motivated by high-stakes applications in domains such as healthcare where (a) rich domain knowledge in the form of qualitative influences is readily available and (b) use of interpretable models that the user can efficiently probe and infer over is often necessary, we focus on learning CNs in the presence of qualitative influences. We propose a penalized objective function that uses the influences as constraints, and develop a gradient-based learning algorithm, KICN. We show that because CNs are tractable, KICN is guaranteed to converge to a local maximum of the penalized objective function. Our experiments on several benchmark data sets show that our new algorithm is superior to the state-of-the-art, especially when the data is scarce or noisy.
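To make the idea of a penalized objective with qualitative influences concrete, the sketch below shows a minimal illustrative example, not the paper's KICN algorithm: a log-likelihood penalized by a hinge term that fires only when a (hypothetical) positive influence "X increases the probability of Y" is violated. The parameterization, penalty weight `lam`, and data-generating process are all assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def penalized_objective(theta, data, lam=10.0):
    """Log-likelihood minus a hinge penalty for violated influences.

    theta : array of shape (2,), logits for P(Y=1 | X=0) and P(Y=1 | X=1)
    data  : integer array of (x, y) rows with x, y in {0, 1}
    lam   : penalty weight (an assumption here, not the paper's constant)
    """
    p = sigmoid(theta)  # p[x] = P(Y=1 | X=x)
    x, y = data[:, 0], data[:, 1]
    log_lik = np.sum(y * np.log(p[x]) + (1 - y) * np.log(1 - p[x]))
    # Hinge penalty: positive only when the qualitative influence
    # P(Y=1 | X=1) >= P(Y=1 | X=0) is violated.
    violation = max(0.0, p[0] - p[1])
    return log_lik - lam * violation

# Synthetic data consistent with the influence: Y is likelier when X = 1.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=200)
y = (rng.random(200) < np.where(x == 1, 0.8, 0.3)).astype(int)
data = np.column_stack([x, y])
print(penalized_objective(np.array([-1.0, 1.0]), data))
```

Parameters that respect the influence incur no penalty, so maximizing this objective trades off data fit against agreement with the stated domain knowledge; gradient ascent on it mirrors, in miniature, the constrained learning the abstract describes.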

Cite this Paper


BibTeX
@InProceedings{pmlr-v216-mathur23a,
  title     = {Knowledge Intensive Learning of Cutset Networks},
  author    = {Mathur, Saurabh and Gogate, Vibhav and Natarajan, Sriraam},
  booktitle = {Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence},
  pages     = {1380--1389},
  year      = {2023},
  editor    = {Evans, Robin J. and Shpitser, Ilya},
  volume    = {216},
  series    = {Proceedings of Machine Learning Research},
  month     = {31 Jul--04 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v216/mathur23a/mathur23a.pdf},
  url       = {https://proceedings.mlr.press/v216/mathur23a.html},
  abstract  = {Cutset networks (CNs) are interpretable probabilistic representations that combine probability trees and tree Bayesian networks, to model and reason about large multi-dimensional probability distributions. Motivated by high-stakes applications in domains such as healthcare where (a) rich domain knowledge in the form of qualitative influences is readily available and (b) use of interpretable models that the user can efficiently probe and infer over is often necessary, we focus on learning CNs in the presence of qualitative influences. We propose a penalized objective function that uses the influences as constraints, and develop a gradient-based learning algorithm, KICN. We show that because CNs are tractable, KICN is guaranteed to converge to a local maximum of the penalized objective function. Our experiments on several benchmark data sets show that our new algorithm is superior to the state-of-the-art, especially when the data is scarce or noisy.}
}
Endnote
%0 Conference Paper
%T Knowledge Intensive Learning of Cutset Networks
%A Saurabh Mathur
%A Vibhav Gogate
%A Sriraam Natarajan
%B Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2023
%E Robin J. Evans
%E Ilya Shpitser
%F pmlr-v216-mathur23a
%I PMLR
%P 1380--1389
%U https://proceedings.mlr.press/v216/mathur23a.html
%V 216
%X Cutset networks (CNs) are interpretable probabilistic representations that combine probability trees and tree Bayesian networks, to model and reason about large multi-dimensional probability distributions. Motivated by high-stakes applications in domains such as healthcare where (a) rich domain knowledge in the form of qualitative influences is readily available and (b) use of interpretable models that the user can efficiently probe and infer over is often necessary, we focus on learning CNs in the presence of qualitative influences. We propose a penalized objective function that uses the influences as constraints, and develop a gradient-based learning algorithm, KICN. We show that because CNs are tractable, KICN is guaranteed to converge to a local maximum of the penalized objective function. Our experiments on several benchmark data sets show that our new algorithm is superior to the state-of-the-art, especially when the data is scarce or noisy.
APA
Mathur, S., Gogate, V. & Natarajan, S. (2023). Knowledge Intensive Learning of Cutset Networks. Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 216:1380-1389. Available from https://proceedings.mlr.press/v216/mathur23a.html.