NeuroBE: Escalating neural network approximations of Bucket Elimination

Sakshi Agarwal, Kalev Kask, Alex Ihler, Rina Dechter
Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, PMLR 180:11-21, 2022.

Abstract

A major limiting factor in graphical model inference is the complexity of computing the partition function. Exact message-passing algorithms such as Bucket Elimination (BE) require exponential memory to compute the partition function; therefore, approximations are necessary. In this paper, we build upon a recently introduced methodology called Deep Bucket Elimination (DBE) that uses classical Neural Networks to approximate messages generated by BE for large buckets. The main feature of our new scheme, renamed NeuroBE, is that it customizes the architecture of the neural networks, their learning process and in particular, adapts the loss function to the internal form or distribution of messages. Our experiments demonstrate significant improvements in accuracy and time compared with the earlier DBE scheme.
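As a rough illustration of the idea of learning a bucket's output message rather than storing it as an exponentially large table, the sketch below fits a small multilayer perceptron to sampled log-space message values. This is a hypothetical, minimal example, not the authors' implementation: the network architecture, the random training data, and the plain log-space MSE loss (which NeuroBE replaces with a loss adapted to the form and distribution of message values) are all assumptions made only for illustration.

import torch
import torch.nn as nn

# Hypothetical setup: a message over 20 binary scope variables, with training
# pairs (assignment, log message value). In practice the targets would come
# from evaluating the bucket's combined functions and eliminating the bucket
# variable; random placeholders are used here so the sketch runs standalone.
n_scope_vars = 20
n_samples = 4096
x = torch.randint(0, 2, (n_samples, n_scope_vars)).float()
y = torch.randn(n_samples, 1)  # placeholder log-space message values

# Small MLP standing in for the exponentially large message table.
model = nn.Sequential(
    nn.Linear(n_scope_vars, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # plain log-space MSE; NeuroBE instead adapts the
                        # loss to the distribution of message values

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Downstream buckets would then query model(...) in place of a stored table.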

Cite this Paper


BibTeX
@InProceedings{pmlr-v180-agarwal22a,
  title     = {NeuroBE: Escalating neural network approximations of Bucket Elimination},
  author    = {Agarwal, Sakshi and Kask, Kalev and Ihler, Alex and Dechter, Rina},
  booktitle = {Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence},
  pages     = {11--21},
  year      = {2022},
  editor    = {Cussens, James and Zhang, Kun},
  volume    = {180},
  series    = {Proceedings of Machine Learning Research},
  month     = {01--05 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v180/agarwal22a/agarwal22a.pdf},
  url       = {https://proceedings.mlr.press/v180/agarwal22a.html},
  abstract  = {A major limiting factor in graphical model inference is the complexity of computing the partition function. Exact message-passing algorithms such as Bucket Elimination (BE) require exponential memory to compute the partition function; therefore, approximations are necessary. In this paper, we build upon a recently introduced methodology called Deep Bucket Elimination (DBE) that uses classical Neural Networks to approximate messages generated by BE for large buckets. The main feature of our new scheme, renamed NeuroBE, is that it customizes the architecture of the neural networks, their learning process and in particular, adapts the loss function to the internal form or distribution of messages. Our experiments demonstrate significant improvements in accuracy and time compared with the earlier DBE scheme.}
}
Endnote
%0 Conference Paper
%T NeuroBE: Escalating neural network approximations of Bucket Elimination
%A Sakshi Agarwal
%A Kalev Kask
%A Alex Ihler
%A Rina Dechter
%B Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2022
%E James Cussens
%E Kun Zhang
%F pmlr-v180-agarwal22a
%I PMLR
%P 11--21
%U https://proceedings.mlr.press/v180/agarwal22a.html
%V 180
%X A major limiting factor in graphical model inference is the complexity of computing the partition function. Exact message-passing algorithms such as Bucket Elimination (BE) require exponential memory to compute the partition function; therefore, approximations are necessary. In this paper, we build upon a recently introduced methodology called Deep Bucket Elimination (DBE) that uses classical Neural Networks to approximate messages generated by BE for large buckets. The main feature of our new scheme, renamed NeuroBE, is that it customizes the architecture of the neural networks, their learning process and in particular, adapts the loss function to the internal form or distribution of messages. Our experiments demonstrate significant improvements in accuracy and time compared with the earlier DBE scheme.
APA
Agarwal, S., Kask, K., Ihler, A. & Dechter, R. (2022). NeuroBE: Escalating neural network approximations of Bucket Elimination. Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 180:11-21. Available from https://proceedings.mlr.press/v180/agarwal22a.html.
