Estimation from Indirect Supervision with Linear Moments

Aditi Raghunathan, Roy Frostig, John Duchi, Percy Liang
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2568-2577, 2016.

Abstract

In structured prediction problems where we have indirect supervision of the output, maximum marginal likelihood faces two computational obstacles: non-convexity of the objective and intractability of even a single gradient computation. In this paper, we bypass both obstacles for a class of what we call linear indirectly-supervised problems. Our approach is simple: we solve a linear system to estimate sufficient statistics of the model, which we then use to estimate parameters via convex optimization. We analyze the statistical properties of our approach and show empirically that it is effective in two settings: learning with local privacy constraints and learning from low-cost count-based annotations.
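The two-step approach in the abstract can be illustrated on its simplest instance, the local-privacy setting: labels pass through a known randomized-response channel, so the observed distribution is linear in the label distribution, and the moment step reduces to solving one linear system. This is a minimal hypothetical sketch (the channel, class count, and sample size are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true label distribution p(y) over 3 classes.
p_true = np.array([0.5, 0.3, 0.2])

# Randomized-response channel: report the true label w.p. eps,
# otherwise a uniformly random label. Column y of S holds p(o | y),
# so the observed marginal is linear in p(y): p(o) = S p(y).
eps = 0.6
S = eps * np.eye(3) + (1 - eps) * np.ones((3, 3)) / 3

# Sample privatized observations o from the channel.
n = 200_000
y = rng.choice(3, size=n, p=p_true)
keep = rng.random(n) < eps
o = np.where(keep, y, rng.choice(3, size=n))

# Moment step: form the empirical p_hat(o), then solve S q = p_hat
# to recover an estimate q of the sufficient statistics p(y).
p_hat_o = np.bincount(o, minlength=3) / n
q = np.linalg.solve(S, p_hat_o)

print(np.round(q, 3))  # close to p_true
```

In the paper's general setting the recovered moments would then feed a convex parameter-estimation step; here the label distribution itself is the quantity of interest, so the linear solve already completes the estimate.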

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-raghunathan16,
  title = {Estimation from Indirect Supervision with Linear Moments},
  author = {Raghunathan, Aditi and Frostig, Roy and Duchi, John and Liang, Percy},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages = {2568--2577},
  year = {2016},
  editor = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume = {48},
  series = {Proceedings of Machine Learning Research},
  address = {New York, New York, USA},
  month = {20--22 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v48/raghunathan16.pdf},
  url = {https://proceedings.mlr.press/v48/raghunathan16.html},
  abstract = {In structured prediction problems where we have indirect supervision of the output, maximum marginal likelihood faces two computational obstacles: non-convexity of the objective and intractability of even a single gradient computation. In this paper, we bypass both obstacles for a class of what we call linear indirectly-supervised problems. Our approach is simple: we solve a linear system to estimate sufficient statistics of the model, which we then use to estimate parameters via convex optimization. We analyze the statistical properties of our approach and show empirically that it is effective in two settings: learning with local privacy constraints and learning from low-cost count-based annotations.}
}
Endnote
%0 Conference Paper
%T Estimation from Indirect Supervision with Linear Moments
%A Aditi Raghunathan
%A Roy Frostig
%A John Duchi
%A Percy Liang
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-raghunathan16
%I PMLR
%P 2568--2577
%U https://proceedings.mlr.press/v48/raghunathan16.html
%V 48
%X In structured prediction problems where we have indirect supervision of the output, maximum marginal likelihood faces two computational obstacles: non-convexity of the objective and intractability of even a single gradient computation. In this paper, we bypass both obstacles for a class of what we call linear indirectly-supervised problems. Our approach is simple: we solve a linear system to estimate sufficient statistics of the model, which we then use to estimate parameters via convex optimization. We analyze the statistical properties of our approach and show empirically that it is effective in two settings: learning with local privacy constraints and learning from low-cost count-based annotations.
RIS
TY  - CPAPER
TI  - Estimation from Indirect Supervision with Linear Moments
AU  - Aditi Raghunathan
AU  - Roy Frostig
AU  - John Duchi
AU  - Percy Liang
BT  - Proceedings of The 33rd International Conference on Machine Learning
DA  - 2016/06/11
ED  - Maria Florina Balcan
ED  - Kilian Q. Weinberger
ID  - pmlr-v48-raghunathan16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 48
SP  - 2568
EP  - 2577
L1  - http://proceedings.mlr.press/v48/raghunathan16.pdf
UR  - https://proceedings.mlr.press/v48/raghunathan16.html
AB  - In structured prediction problems where we have indirect supervision of the output, maximum marginal likelihood faces two computational obstacles: non-convexity of the objective and intractability of even a single gradient computation. In this paper, we bypass both obstacles for a class of what we call linear indirectly-supervised problems. Our approach is simple: we solve a linear system to estimate sufficient statistics of the model, which we then use to estimate parameters via convex optimization. We analyze the statistical properties of our approach and show empirically that it is effective in two settings: learning with local privacy constraints and learning from low-cost count-based annotations.
ER  -
APA
Raghunathan, A., Frostig, R., Duchi, J. & Liang, P. (2016). Estimation from Indirect Supervision with Linear Moments. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:2568-2577. Available from https://proceedings.mlr.press/v48/raghunathan16.html.
