Augmenting and Tuning Knowledge Graph Embeddings

Robert Bamler, Farnood Salehi, Stephan Mandt
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:508-518, 2020.

Abstract

Knowledge graph embeddings rank among the most successful methods for link prediction in knowledge graphs, i.e., the task of completing an incomplete collection of relational facts. A downside of these models is their strong sensitivity to model hyperparameters, in particular regularizers, which have to be extensively tuned to reach good performance [Kadlec et al., 2017]. We propose an efficient method for large-scale hyperparameter tuning by interpreting these models in a probabilistic framework. After a model augmentation that introduces per-entity hyperparameters, we use a variational expectation-maximization approach to tune thousands of such hyperparameters with minimal additional cost. Our approach is agnostic to details of the model and results in a new state of the art in link prediction on standard benchmark data.
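To make the idea concrete, here is a minimal sketch (in PyTorch, not the authors' code) of a DistMult-style knowledge graph embedding model in which every entity carries its own regularization strength, re-estimated by an EM-like alternation instead of being tuned by hand. All names (PerEntityDistMult, m_step, lam) are illustrative, and the closed-form precision update is a simplified stand-in for the variational M-step described in the paper.

import torch

class PerEntityDistMult(torch.nn.Module):
    """Illustrative DistMult scorer with one L2 strength per entity."""

    def __init__(self, n_entities, n_relations, dim):
        super().__init__()
        self.ent = torch.nn.Embedding(n_entities, dim)
        self.rel = torch.nn.Embedding(n_relations, dim)
        # One Gaussian-prior precision (regularization weight) per entity,
        # playing the role of the per-entity hyperparameters in the abstract.
        self.register_buffer("lam", torch.ones(n_entities))

    def score(self, h, r, t):
        # DistMult triple score: <e_h, w_r, e_t>.
        return (self.ent(h) * self.rel(r) * self.ent(t)).sum(-1)

    def loss(self, h, r, t, labels):
        # Likelihood term: binary cross-entropy over true/corrupted triples.
        nll = torch.nn.functional.binary_cross_entropy_with_logits(
            self.score(h, r, t), labels.float())
        # Per-entity weighted L2 penalty (a Gaussian prior, in the
        # probabilistic reading of the regularizer).
        ents = torch.cat([h, t])
        reg = (self.lam[ents] * self.ent(ents).pow(2).sum(-1)).mean()
        return nll + reg

    @torch.no_grad()
    def m_step(self, eps=1e-6):
        # EM-like hyperparameter update: set each precision to the value
        # maximizing a zero-mean Gaussian prior given the current embedding,
        # lam_i = dim / (||e_i||^2 + eps). A simplified stand-in for the
        # paper's variational M-step.
        sq_norms = self.ent.weight.pow(2).sum(-1)
        self.lam.copy_(self.ent.weight.shape[1] / (sq_norms + eps))

In use, one would alternate a few epochs of gradient steps on loss() with a single call to m_step(), so that thousands of per-entity hyperparameters are re-estimated at negligible cost on top of ordinary training, which is the efficiency claim made in the abstract.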

Cite this Paper

BibTeX
@InProceedings{pmlr-v115-bamler20a,
  title     = {Augmenting and Tuning Knowledge Graph Embeddings},
  author    = {Bamler, Robert and Salehi, Farnood and Mandt, Stephan},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages     = {508--518},
  year      = {2020},
  editor    = {Adams, Ryan P. and Gogate, Vibhav},
  volume    = {115},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v115/bamler20a/bamler20a.pdf},
  url       = {https://proceedings.mlr.press/v115/bamler20a.html}
}
Endnote
%0 Conference Paper
%T Augmenting and Tuning Knowledge Graph Embeddings
%A Robert Bamler
%A Farnood Salehi
%A Stephan Mandt
%B Proceedings of The 35th Uncertainty in Artificial Intelligence Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Ryan P. Adams
%E Vibhav Gogate
%F pmlr-v115-bamler20a
%I PMLR
%P 508--518
%U https://proceedings.mlr.press/v115/bamler20a.html
%V 115
APA
Bamler, R., Salehi, F. & Mandt, S. (2020). Augmenting and Tuning Knowledge Graph Embeddings. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:508-518. Available from https://proceedings.mlr.press/v115/bamler20a.html.
