Context-Aware Online Collective Inference for Templated Graphical Models

Charles Dickens, Connor Pryor, Eriq Augustine, Alexander Miller, Lise Getoor
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:2707-2716, 2021.

Abstract

In this work, we examine online collective inference, the problem of maintaining and performing inference over a sequence of evolving graphical models. We utilize templated graphical models (TGM), a general class of graphical models expressed via templates and instantiated with data. A key challenge is minimizing the cost of instantiating the updated model. To address this, we define a class of exact and approximate context-aware methods for updating an existing TGM. These methods avoid a full re-instantiation by using the context of the updates to only add relevant components to the graphical model. Further, we provide stability bounds for the general online inference problem and regret bounds for a proposed approximation. Finally, we implement our approach in probabilistic soft logic, and test it on several online collective inference tasks. Through these experiments we verify the bounds on regret and stability, and show that our approximate online approach consistently runs two to five times faster than the offline alternative while, surprisingly, maintaining the quality of the predictions.

Cite this Paper
BibTeX
@InProceedings{pmlr-v139-dickens21a,
  title     = {Context-Aware Online Collective Inference for Templated Graphical Models},
  author    = {Dickens, Charles and Pryor, Connor and Augustine, Eriq and Miller, Alexander and Getoor, Lise},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {2707--2716},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/dickens21a/dickens21a.pdf},
  url       = {https://proceedings.mlr.press/v139/dickens21a.html},
  abstract  = {In this work, we examine online collective inference, the problem of maintaining and performing inference over a sequence of evolving graphical models. We utilize templated graphical models (TGM), a general class of graphical models expressed via templates and instantiated with data. A key challenge is minimizing the cost of instantiating the updated model. To address this, we define a class of exact and approximate context-aware methods for updating an existing TGM. These methods avoid a full re-instantiation by using the context of the updates to only add relevant components to the graphical model. Further, we provide stability bounds for the general online inference problem and regret bounds for a proposed approximation. Finally, we implement our approach in probabilistic soft logic, and test it on several online collective inference tasks. Through these experiments we verify the bounds on regret and stability, and show that our approximate online approach consistently runs two to five times faster than the offline alternative while, surprisingly, maintaining the quality of the predictions.}
}
Endnote
%0 Conference Paper
%T Context-Aware Online Collective Inference for Templated Graphical Models
%A Charles Dickens
%A Connor Pryor
%A Eriq Augustine
%A Alexander Miller
%A Lise Getoor
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-dickens21a
%I PMLR
%P 2707--2716
%U https://proceedings.mlr.press/v139/dickens21a.html
%V 139
%X In this work, we examine online collective inference, the problem of maintaining and performing inference over a sequence of evolving graphical models. We utilize templated graphical models (TGM), a general class of graphical models expressed via templates and instantiated with data. A key challenge is minimizing the cost of instantiating the updated model. To address this, we define a class of exact and approximate context-aware methods for updating an existing TGM. These methods avoid a full re-instantiation by using the context of the updates to only add relevant components to the graphical model. Further, we provide stability bounds for the general online inference problem and regret bounds for a proposed approximation. Finally, we implement our approach in probabilistic soft logic, and test it on several online collective inference tasks. Through these experiments we verify the bounds on regret and stability, and show that our approximate online approach consistently runs two to five times faster than the offline alternative while, surprisingly, maintaining the quality of the predictions.
APA
Dickens, C., Pryor, C., Augustine, E., Miller, A. & Getoor, L. (2021). Context-Aware Online Collective Inference for Templated Graphical Models. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:2707-2716. Available from https://proceedings.mlr.press/v139/dickens21a.html.

Related Material