Lifted Weight Learning of Markov Logic Networks (Revisited One More Time)

Ondrej Kuzelka, Vyacheslav Kungurtsev, Yuyi Wang
Proceedings of the 10th International Conference on Probabilistic Graphical Models, PMLR 138:269-280, 2020.

Abstract

We revisit the problem of lifted weight learning of Markov logic networks (MLNs). We show that there is an algorithm for maximum-likelihood learning which runs in time polynomial in the size of the domain, whenever the partition function of the given MLN can be computed in polynomial time. This improves on our recent results where we showed the same result with the additional dependency of the runtime on a parameter of the training data, called interiority, which measures how “extreme” the given training data are. In this work, we get rid of this dependency. The main new technical ingredients that we exploit are theoretical results obtained recently by Straszak and Vishnoi (Maximum Entropy Distributions: Bit Complexity and Stability, COLT 2019).
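The abstract's key premise can be made concrete: maximum-likelihood weight learning for an MLN maximizes log P(x | w) = Σᵢ wᵢ nᵢ(x) − log Z(w), whose gradient is the training formula counts minus their expectation under the model, so learning reduces to gradient ascent with repeated calls to a partition-function oracle. The sketch below illustrates this with a brute-force toy oracle standing in for a lifted, polynomial-time one; the world space and training counts are hypothetical illustrative data, not from the paper.

```python
import math

# Toy stand-in for a lifted partition-function oracle. Here Z(w) is
# computed by brute force over a tiny set of possible worlds; in the
# lifted setting of the paper it would instead be evaluated in time
# polynomial in the domain size. Each tuple is the formula-count
# vector n(x) = (n_1(x), n_2(x)) of one world (hypothetical data).
worlds = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)]

def log_z(w):
    """log of the partition function Z(w) = sum_x exp(<w, n(x)>)."""
    return math.log(sum(math.exp(sum(wi * ni for wi, ni in zip(w, n)))
                        for n in worlds))

def expected_counts(w):
    """E[n_i] under the MLN distribution; the log-likelihood
    gradient is n_i(training) - E[n_i]."""
    lz = log_z(w)
    probs = [math.exp(sum(wi * ni for wi, ni in zip(w, n)) - lz)
             for n in worlds]
    return [sum(p * n[i] for p, n in zip(probs, worlds))
            for i in range(len(w))]

def learn_weights(train_counts, steps=3000, lr=0.2):
    """Gradient ascent on the (concave) log-likelihood."""
    w = [0.0] * len(train_counts)
    for _ in range(steps):
        exp = expected_counts(w)
        w = [wi + lr * (t - e) for wi, t, e in zip(w, train_counts, exp)]
    return w
```

At the optimum the model's expected counts match the training counts; whether that optimum is finite depends on the training statistics lying in the interior of the convex hull of count vectors, which is exactly the "interiority" dependence the paper removes from the runtime analysis.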

Cite this Paper


BibTeX
@InProceedings{pmlr-v138-kuzelka20a,
  title     = {Lifted Weight Learning of Markov Logic Networks (Revisited One More Time)},
  author    = {Kuzelka, Ondrej and Kungurtsev, Vyacheslav and Wang, Yuyi},
  booktitle = {Proceedings of the 10th International Conference on Probabilistic Graphical Models},
  pages     = {269--280},
  year      = {2020},
  editor    = {Jaeger, Manfred and Nielsen, Thomas Dyhre},
  volume    = {138},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--25 Sep},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v138/kuzelka20a/kuzelka20a.pdf},
  url       = {https://proceedings.mlr.press/v138/kuzelka20a.html},
  abstract  = {We revisit the problem of lifted weight learning of Markov logic networks (MLNs). We show that there is an algorithm for maximum-likelihood learning which runs in time polynomial in the size of the domain, whenever the partition function of the given MLN can be computed in polynomial time. This improves on our recent results where we showed the same result with the additional dependency of the runtime on a parameter of the training data, called interiority, which measures how “extreme” the given training data are. In this work, we get rid of this dependency. The main new technical ingredients that we exploit are theoretical results obtained recently by Straszak and Vishnoi (Maximum Entropy Distributions: Bit Complexity and Stability, COLT 2019).}
}
Endnote
%0 Conference Paper
%T Lifted Weight Learning of Markov Logic Networks (Revisited One More Time)
%A Ondrej Kuzelka
%A Vyacheslav Kungurtsev
%A Yuyi Wang
%B Proceedings of the 10th International Conference on Probabilistic Graphical Models
%C Proceedings of Machine Learning Research
%D 2020
%E Manfred Jaeger
%E Thomas Dyhre Nielsen
%F pmlr-v138-kuzelka20a
%I PMLR
%P 269--280
%U https://proceedings.mlr.press/v138/kuzelka20a.html
%V 138
%X We revisit the problem of lifted weight learning of Markov logic networks (MLNs). We show that there is an algorithm for maximum-likelihood learning which runs in time polynomial in the size of the domain, whenever the partition function of the given MLN can be computed in polynomial time. This improves on our recent results where we showed the same result with the additional dependency of the runtime on a parameter of the training data, called interiority, which measures how “extreme” the given training data are. In this work, we get rid of this dependency. The main new technical ingredients that we exploit are theoretical results obtained recently by Straszak and Vishnoi (Maximum Entropy Distributions: Bit Complexity and Stability, COLT 2019).
APA
Kuzelka, O., Kungurtsev, V. & Wang, Y. (2020). Lifted Weight Learning of Markov Logic Networks (Revisited One More Time). Proceedings of the 10th International Conference on Probabilistic Graphical Models, in Proceedings of Machine Learning Research 138:269-280. Available from https://proceedings.mlr.press/v138/kuzelka20a.html.