Domain-Size Aware Markov Logic Networks

Happy Mittal, Ayush Bhardwaj, Vibhav Gogate, Parag Singla
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:3216-3224, 2019.

Abstract

Several domains in AI need to represent the relational structure as well as model uncertainty. Markov Logic is a powerful formalism which achieves this by attaching weights to formulas in finite first-order logic. Though Markov Logic Networks (MLNs) have been used for a wide variety of applications, a significant challenge remains that weights do not generalize well when training domain sizes are different from those seen during testing. In particular, it has been observed that marginal probabilities tend to extremes in the limit of increasing domain sizes. As the first contribution of our work, we further characterize the distribution and show that marginal probabilities tend to a constant independent of weights and not always to extremes as was previously observed. As our second contribution, we present a principled solution to this problem by defining Domain-size Aware Markov Logic Networks (DA-MLNs), which can be seen as re-parameterizing the MLNs after taking domain size into consideration. For some simple but representative MLN formulas, we formally prove that probabilities defined by DA-MLNs are well behaved. On the practical side, DA-MLNs allow us to generalize the weights learned over small-sized training data to much larger domains. Experiments on three different benchmark MLNs show that our approach results in significant performance gains compared to existing methods.
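The re-parameterization idea can be illustrated with a small sketch. The helper below is hypothetical (function name, signature, and the exact scaling rule are assumptions, not taken from the paper's text): it down-scales a formula's weight by the largest number of formula groundings that any single ground atom participates in, which grows with domain size and is one natural way to keep marginal probabilities from drifting to extremes as domains grow.

```python
from math import prod


def scaled_weight(weight, formula_vars, atom_vars_list, domain_sizes):
    """Hypothetical domain-size-aware weight scaling.

    Divides a formula's weight by the maximum number of groundings of the
    formula that any one ground atom appears in: for each atom, that count
    is the product of the domain sizes of the formula variables the atom
    does NOT mention.

    formula_vars:   set of variables in the formula, e.g. {'x', 'y'}
    atom_vars_list: one variable set per atom, e.g. [{'x'}, {'x', 'y'}]
    domain_sizes:   mapping variable -> domain size, e.g. {'x': 100}
    """
    connections = [
        prod(domain_sizes[v] for v in formula_vars - atom_vars)
        for atom_vars in atom_vars_list
    ]
    return weight / max(1, max(connections))


# Example formula: Smokes(x) ^ Friends(x, y) => Smokes(y), weight 1.5.
# Smokes(x) appears in |dom(y)| = 100 groundings, so the weight is
# divided by 100.
w = scaled_weight(
    1.5,
    {'x', 'y'},
    [{'x'}, {'x', 'y'}, {'y'}],
    {'x': 100, 'y': 100},
)
```

Under this sketch, the effective weight shrinks as test-time domains grow, so weights learned on small training domains are not applied at full strength to the vastly larger number of groundings at test time.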

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-mittal19a,
  title     = {Domain-Size Aware Markov Logic Networks},
  author    = {Mittal, Happy and Bhardwaj, Ayush and Gogate, Vibhav and Singla, Parag},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {3216--3224},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/mittal19a/mittal19a.pdf},
  url       = {https://proceedings.mlr.press/v89/mittal19a.html},
  abstract  = {Several domains in AI need to represent the relational structure as well as model uncertainty. Markov Logic is a powerful formalism which achieves this by attaching weights to formulas in finite first-order logic. Though Markov Logic Networks (MLNs) have been used for a wide variety of applications, a significant challenge remains that weights do not generalize well when training domain sizes are different from those seen during testing. In particular, it has been observed that marginal probabilities tend to extremes in the limit of increasing domain sizes. As the first contribution of our work, we further characterize the distribution and show that marginal probabilities tend to a constant independent of weights and not always to extremes as was previously observed. As our second contribution, we present a principled solution to this problem by defining Domain-size Aware Markov Logic Networks (DA-MLNs) which can be seen as re-parameterizing the MLNs after taking domain size into consideration. For some simple but representative MLN formulas, we formally prove that probabilities defined by DA-MLNs are well behaved. On a practical side, DA-MLNs allow us to generalize the weights learned over small-sized training data to much larger domains. Experiments on three different benchmark MLNs show that our approach results in significant performance gains compared to existing methods.}
}
Endnote
%0 Conference Paper
%T Domain-Size Aware Markov Logic Networks
%A Happy Mittal
%A Ayush Bhardwaj
%A Vibhav Gogate
%A Parag Singla
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-mittal19a
%I PMLR
%P 3216--3224
%U https://proceedings.mlr.press/v89/mittal19a.html
%V 89
%X Several domains in AI need to represent the relational structure as well as model uncertainty. Markov Logic is a powerful formalism which achieves this by attaching weights to formulas in finite first-order logic. Though Markov Logic Networks (MLNs) have been used for a wide variety of applications, a significant challenge remains that weights do not generalize well when training domain sizes are different from those seen during testing. In particular, it has been observed that marginal probabilities tend to extremes in the limit of increasing domain sizes. As the first contribution of our work, we further characterize the distribution and show that marginal probabilities tend to a constant independent of weights and not always to extremes as was previously observed. As our second contribution, we present a principled solution to this problem by defining Domain-size Aware Markov Logic Networks (DA-MLNs) which can be seen as re-parameterizing the MLNs after taking domain size into consideration. For some simple but representative MLN formulas, we formally prove that probabilities defined by DA-MLNs are well behaved. On a practical side, DA-MLNs allow us to generalize the weights learned over small-sized training data to much larger domains. Experiments on three different benchmark MLNs show that our approach results in significant performance gains compared to existing methods.
APA
Mittal, H., Bhardwaj, A., Gogate, V. & Singla, P. (2019). Domain-Size Aware Markov Logic Networks. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:3216-3224. Available from https://proceedings.mlr.press/v89/mittal19a.html.