Adaptive Hashing for Model Counting

Jonathan Kuck, Tri Dao, Shengjia Zhao, Burak Bartan, Ashish Sabharwal, Stefano Ermon
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:271-280, 2020.

Abstract

Randomized hashing algorithms have seen recent success in providing bounds on the model count of a propositional formula. These methods repeatedly check the satisfiability of a formula subject to increasingly stringent random constraints. Key to these approaches is the choice of a fixed family of hash functions that strikes a good balance between computational efficiency and statistical guarantees for a hypothetical worst-case formula. In this paper we propose a scheme where the family of hash functions is chosen adaptively, based on properties of the specific input formula. Akin to adaptive importance sampling, we use solutions to the formula (found during the bounding procedure of current methods) to estimate properties of the solution set, which guides the construction of random constraints. Additionally, we introduce an orthogonal variance reduction technique that is broadly applicable to hashing-based methods. We empirically show that, when combined, these approaches lead to better lower bounds on existing benchmarks, with a median improvement factor of 2^13 over 1,198 propositional formulas.
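The bounding procedure the abstract describes can be made concrete. Below is a minimal, self-contained Python sketch, not the paper's implementation: it adds m random XOR (parity) constraints drawn from the standard dense, pairwise-independent family (each variable included with probability 1/2) and checks whether any model of a toy formula survives. Satisfiability is checked by brute-force enumeration here rather than a SAT solver, and the formula and all names are hypothetical.

import itertools
import random

N_VARS = 6

def formula(x):
    # Toy formula: (x0 or x1) and (not x2 or x3) and (x4 or not x5).
    return (x[0] or x[1]) and ((not x[2]) or x[3]) and (x[4] or (not x[5]))

def random_parity_constraints(m, n, rng):
    # Each constraint (a, b) requires XOR of x[i] over {i : a[i]} to equal b.
    # Including each variable with probability 1/2 yields the fixed,
    # pairwise-independent hash family used by standard methods.
    return [([rng.random() < 0.5 for _ in range(n)], rng.random() < 0.5)
            for _ in range(m)]

def satisfies(x, constraints):
    return all(sum(xi for ai, xi in zip(a, x) if ai) % 2 == int(b)
               for a, b in constraints)

def survives(m, rng):
    # Does any model of the formula survive m random parity constraints?
    cons = random_parity_constraints(m, N_VARS, rng)
    return any(formula(x) and satisfies(x, cons)
               for x in itertools.product([False, True], repeat=N_VARS))

rng = random.Random(0)
exact = sum(formula(x) for x in itertools.product([False, True], repeat=N_VARS))
for m in range(N_VARS + 1):
    frac = sum(survives(m, rng) for _ in range(30)) / 30
    print(f"m={m}: satisfiable in {frac:.0%} of trials (exact count: {exact})")

Each surviving constraint halves the solution set in expectation, so the largest m at which most trials remain satisfiable certifies a lower bound of roughly 2^m on the model count. In terms of the abstract, the coin-flip density of 1/2 above is the "fixed family"; the paper's adaptive scheme instead tunes the construction of these constraints to the input formula, using solutions found during the bounding procedure.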

Cite this Paper


BibTeX
@InProceedings{pmlr-v115-kuck20a,
  title     = {Adaptive Hashing for Model Counting},
  author    = {Kuck, Jonathan and Dao, Tri and Zhao, Shengjia and Bartan, Burak and Sabharwal, Ashish and Ermon, Stefano},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages     = {271--280},
  year      = {2020},
  editor    = {Adams, Ryan P. and Gogate, Vibhav},
  volume    = {115},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v115/kuck20a/kuck20a.pdf},
  url       = {https://proceedings.mlr.press/v115/kuck20a.html},
  abstract  = {Randomized hashing algorithms have seen recent success in providing bounds on the model count of a propositional formula. These methods repeatedly check the satisfiability of a formula subject to increasingly stringent random constraints. Key to these approaches is the choice of a fixed family of hash functions that strikes a good balance between computational efficiency and statistical guarantees for a hypothetical worst case formula. In this paper we propose a scheme where the family of hash functions is chosen adaptively, based on properties of the specific input formula. Akin to adaptive importance sampling, we use solutions to the formula (found during the bounding procedure of current methods) to estimate properties of the solution set, which guides the construction of random constraints. Additionally, we introduce an orthogonal variance reduction technique that is broadly applicable to hashing based methods. We empirically show that, when combined, these approaches lead to better lower bounds on existing benchmarks, with a median improvement factor of 2^13 over 1,198 propositional formulas.}
}
Endnote
%0 Conference Paper
%T Adaptive Hashing for Model Counting
%A Jonathan Kuck
%A Tri Dao
%A Shengjia Zhao
%A Burak Bartan
%A Ashish Sabharwal
%A Stefano Ermon
%B Proceedings of The 35th Uncertainty in Artificial Intelligence Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Ryan P. Adams
%E Vibhav Gogate
%F pmlr-v115-kuck20a
%I PMLR
%P 271--280
%U https://proceedings.mlr.press/v115/kuck20a.html
%V 115
%X Randomized hashing algorithms have seen recent success in providing bounds on the model count of a propositional formula. These methods repeatedly check the satisfiability of a formula subject to increasingly stringent random constraints. Key to these approaches is the choice of a fixed family of hash functions that strikes a good balance between computational efficiency and statistical guarantees for a hypothetical worst case formula. In this paper we propose a scheme where the family of hash functions is chosen adaptively, based on properties of the specific input formula. Akin to adaptive importance sampling, we use solutions to the formula (found during the bounding procedure of current methods) to estimate properties of the solution set, which guides the construction of random constraints. Additionally, we introduce an orthogonal variance reduction technique that is broadly applicable to hashing based methods. We empirically show that, when combined, these approaches lead to better lower bounds on existing benchmarks, with a median improvement factor of 2^13 over 1,198 propositional formulas.
APA
Kuck, J., Dao, T., Zhao, S., Bartan, B., Sabharwal, A. & Ermon, S. (2020). Adaptive Hashing for Model Counting. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:271-280. Available from https://proceedings.mlr.press/v115/kuck20a.html.
