Reified Context Models

Jacob Steinhardt, Percy Liang
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1043-1052, 2015.

Abstract

A classic tension exists between exact inference in a simple model and approximate inference in a complex model. The latter offers expressivity and thus accuracy, but the former provides coverage of the space, an important property for confidence estimation and learning with indirect supervision. In this work, we introduce a new approach, reified context models, to reconcile this tension. Specifically, we let the choice of factors in a graphical model (the contexts) be random variables inside the model itself. In this sense, the contexts are reified and can be chosen in a data-dependent way. Empirically, we show that our approach obtains expressivity and coverage on three sequence modeling tasks.
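To make the abstract's central idea concrete, the sketch below is a minimal toy illustration in Python (our own construction under stated assumptions, not the authors' implementation): each position in a short chain model conditions on a context that is either the exact previous symbol or a collapsed wildcard "*", and the choice of context is itself a random variable summed over alongside the sequence. The vocabulary, scoring function, context prior, and all names are hypothetical.

import itertools
import math

VOCAB = ["a", "b"]

def contexts(prev):
    """Candidate contexts at a position: the exact previous symbol,
    or a collapsed wildcard covering every previous symbol at once."""
    return [prev, "*"]

def factor(context, x, theta):
    """Unnormalized score for emitting x given a (possibly coarsened) context."""
    return math.exp(theta.get((context, x), 0.0))

def context_prior(c):
    """Prior weight on choosing the fine-grained vs. the coarse context."""
    return 0.7 if c != "*" else 0.3

def joint_score(seq, ctx, theta):
    """Score of a sequence together with a reified context assignment."""
    score, prev = 1.0, "^"  # "^" marks the start of the chain
    for x, c in zip(seq, ctx):
        assert c in (prev, "*"), "context must summarize the true history"
        score *= context_prior(c) * factor(c, x, theta)
        prev = x
    return score

def partition(n, theta):
    """Brute-force sum over all length-n sequences AND context choices."""
    total = 0.0
    for seq in itertools.product(VOCAB, repeat=n):
        prev, choices = "^", []
        for x in seq:
            choices.append(contexts(prev))
            prev = x
        for ctx in itertools.product(*choices):
            total += joint_score(seq, ctx, theta)
    return total

theta = {("a", "b"): 1.0, ("*", "a"): 0.5}  # toy parameters
print(f"partition over sequences and contexts: {partition(3, theta):.3f}")

Because the wildcard context covers every possible history, no sequence is ever assigned zero mass by pruning, while the fine-grained contexts supply expressive conditioning; this loosely mirrors the coverage/expressivity trade-off the abstract describes.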

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-steinhardta15,
  title     = {Reified Context Models},
  author    = {Steinhardt, Jacob and Liang, Percy},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1043--1052},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/steinhardta15.pdf},
  url       = {https://proceedings.mlr.press/v37/steinhardta15.html}
}
APA
Steinhardt, J. & Liang, P. (2015). Reified Context Models. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:1043-1052. Available from https://proceedings.mlr.press/v37/steinhardta15.html.
