Markov Logic Networks for Knowledge Base Completion: A Theoretical Analysis Under the MCAR Assumption

Ondřej Kuželka, Jesse Davis
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:1138-1148, 2020.

Abstract

We study the following question. We are given a knowledge base in which some facts are missing. We learn the weights of a Markov logic network using maximum likelihood estimation on this knowledge base and then use the learned Markov logic network to predict the missing facts. Assuming that the facts are missing independently and with the same probability, can we say that this approach is consistent in some precise sense? This is a non-trivial question because we are learning from only one training example. In this paper we show that the answer to this question is positive.
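The missingness assumption in the abstract (each fact absent independently, with the same probability) can be illustrated with a minimal sketch. The knowledge base, predicate names, and the `mcar_observe` helper below are hypothetical, for illustration only; they are not from the paper.

```python
import random

def mcar_observe(facts, p_observe, seed=0):
    """Return the subset of facts that survive MCAR deletion:
    each fact is kept independently with probability p_observe.
    Facts are visited in sorted order so results are reproducible."""
    rng = random.Random(seed)
    return {f for f in sorted(facts) if rng.random() < p_observe}

# A toy knowledge base of ground facts (illustrative only).
kb = {("friends", "alice", "bob"),
      ("friends", "bob", "carol"),
      ("smokes", "alice"),
      ("smokes", "bob")}

observed = mcar_observe(kb, p_observe=0.75)
missing = kb - observed  # the facts an MLN learned on `observed` would be asked to predict
```

The paper's question is then whether maximum likelihood weight learning on the single observed example `observed` recovers, in a consistent sense, a model that predicts `missing`.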

Cite this Paper


BibTeX
@InProceedings{pmlr-v115-kuzelka20a,
  title     = {Markov Logic Networks for Knowledge Base Completion: A Theoretical Analysis Under the MCAR Assumption},
  author    = {Ku\v{z}elka, Ond\v{r}ej and Davis, Jesse},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages     = {1138--1148},
  year      = {2020},
  editor    = {Adams, Ryan P. and Gogate, Vibhav},
  volume    = {115},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v115/kuzelka20a/kuzelka20a.pdf},
  url       = {https://proceedings.mlr.press/v115/kuzelka20a.html},
  abstract  = {We study the following question. We are given a knowledge base in which some facts are missing. We learn the weights of a Markov logic network using maximum likelihood estimation on this knowledge base and then use the learned Markov logic network to predict the missing facts. Assuming that the facts are missing independently and with the same probability, can we say that this approach is consistent in some precise sense? This is a non-trivial question because we are learning from only one training example. In this paper we show that the answer to this question is positive.}
}
Endnote
%0 Conference Paper
%T Markov Logic Networks for Knowledge Base Completion: A Theoretical Analysis Under the MCAR Assumption
%A Ondřej Kuželka
%A Jesse Davis
%B Proceedings of The 35th Uncertainty in Artificial Intelligence Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Ryan P. Adams
%E Vibhav Gogate
%F pmlr-v115-kuzelka20a
%I PMLR
%P 1138--1148
%U https://proceedings.mlr.press/v115/kuzelka20a.html
%V 115
%X We study the following question. We are given a knowledge base in which some facts are missing. We learn the weights of a Markov logic network using maximum likelihood estimation on this knowledge base and then use the learned Markov logic network to predict the missing facts. Assuming that the facts are missing independently and with the same probability, can we say that this approach is consistent in some precise sense? This is a non-trivial question because we are learning from only one training example. In this paper we show that the answer to this question is positive.
APA
Kuželka, O. & Davis, J. (2020). Markov Logic Networks for Knowledge Base Completion: A Theoretical Analysis Under the MCAR Assumption. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:1138-1148. Available from https://proceedings.mlr.press/v115/kuzelka20a.html.