Efficient Weight Learning in High-Dimensional Untied MLNs
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:1637-1645, 2018.
Abstract
Existing techniques for improving scalability of weight learning in Markov Logic Networks (MLNs) are typically effective when the parameters of the MLN are tied, i.e., several ground formulas in the MLN share the same weight. However, to improve accuracy in real-world problems, we typically need to learn separate weights for different groundings of the MLN. In this paper, we present an approach to perform efficient weight learning in MLNs containing high-dimensional, untied formulas. The fundamental idea in our approach is to help the learning algorithm navigate the parameter search space more efficiently by a) tying together groundings of untied formulas that are likely to have similar weights, and b) setting good initial values for the parameters. To do this, we follow a hierarchical approach, where we first learn the parameters that are to be tied using a non-relational learner. We then use a relational learner to learn the tied-parameter MLN with initial values derived from parameters learned by the non-relational learner. We illustrate the promise of our approach on three different real-world problems and show that our approach yields much more scalable and accurate results compared to existing state-of-the-art relational learning systems.
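The two-stage idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: we assume a non-relational learner has already produced a per-grounding weight estimate, and we use a simple 1-D k-means routine (an illustrative choice) to tie groundings with similar estimates together and to initialize each tied weight at its cluster mean.

```python
def kmeans_1d(values, k, iters=50):
    """Tiny 1-D k-means: groups scalar weight estimates into k clusters.

    Centers are initialized evenly over the value range, so the result
    is deterministic. Returns (cluster assignment per value, centers).
    """
    lo, hi = min(values), max(values)
    centers = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[j].append(v)
        # Recompute each center as its cluster mean; keep empty clusters put.
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    assign = [min(range(k), key=lambda c: abs(v - centers[c]))
              for v in values]
    return assign, centers

# Hypothetical per-grounding weight estimates from a non-relational learner.
est = [0.1, 0.12, 0.11, 2.0, 2.1, 1.95, -1.0, -0.9]

# Tie groundings whose estimates fall in the same cluster; the relational
# learner would then optimize one weight per cluster, starting from the
# cluster mean instead of from scratch.
assign, centers = kmeans_1d(est, k=3)
tied_init = {c: centers[c] for c in set(assign)}
```

The payoff of this structure is a much smaller parameter space for the relational learner (here, 3 tied weights instead of 8 untied ones) plus informed starting values, which is the scalability mechanism the abstract describes.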