Learning Dependency Structures for Weak Supervision Models

Paroma Varma, Frederic Sala, Ann He, Alexander Ratner, Christopher Re
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6418-6427, 2019.

Abstract

Labeling training data is a key bottleneck in the modern machine learning pipeline. Recent weak supervision approaches combine labels from multiple noisy sources by estimating their accuracies without access to ground truth labels; however, estimating the dependencies among these sources is a critical challenge. We focus on a robust PCA-based algorithm for learning these dependency structures, establish improved theoretical recovery rates, and outperform existing methods on various real-world tasks. Under certain conditions, we show that the amount of unlabeled data needed can scale sublinearly or even logarithmically with the number of sources m, improving over previous efforts that ignore the sparsity pattern in the dependency structure and scale linearly in m. We provide an information-theoretic lower bound on the minimum sample complexity of the weak supervision setting. Our method outperforms weak supervision approaches that assume conditionally-independent sources by up to 4.64 F1 points and previous structure learning approaches by up to 4.41 F1 points on real-world relation extraction and image classification tasks.
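To make the dependency-structure idea concrete, below is a minimal, hypothetical Python sketch of the sparse-plus-low-rank recovery that underlies robust-PCA-style structure learning over weak supervision sources. It is not the authors' implementation: the function name learn_dependency_structure, the (n, m) labeling-function output matrix lf_outputs, and the penalties lambda_s, lambda_l, and thresh are all illustrative assumptions, and cvxpy stands in for whatever solver one prefers.

    # A sketch (not the paper's code) of sparse + low-rank recovery for
    # learning dependencies among weak supervision sources.
    # Assumptions (hypothetical): lf_outputs is an (n, m) array of source
    # outputs; lambda_s, lambda_l, thresh are user-chosen knobs.
    import numpy as np
    import cvxpy as cp

    def learn_dependency_structure(lf_outputs, lambda_s=0.1, lambda_l=1.0, thresh=1e-2):
        """Recover a sparse dependency pattern among m weak supervision sources.

        The inverse covariance of the observed source outputs is modeled as the
        sum of a sparse matrix S (pairwise source dependencies) and a low-rank
        matrix L (correlation induced by the unobserved true label); the two are
        separated with a convex program combining an elementwise l1 penalty on S
        and a nuclear-norm penalty on L.
        """
        n, m = lf_outputs.shape
        # Empirical covariance of the source outputs and its regularized inverse.
        sigma = np.cov(lf_outputs, rowvar=False) + 1e-6 * np.eye(m)
        K = np.linalg.inv(sigma)

        S = cp.Variable((m, m), symmetric=True)
        L = cp.Variable((m, m), symmetric=True)
        objective = cp.Minimize(
            cp.sum_squares(S + L - K)
            + lambda_s * cp.sum(cp.abs(S))   # encourage sparsity in S
            + lambda_l * cp.normNuc(L)       # encourage low rank in L
        )
        cp.Problem(objective).solve()

        # Off-diagonal entries of S above the threshold indicate likely dependencies.
        S_hat = S.value
        return [(i, j) for i in range(m) for j in range(i + 1, m)
                if abs(S_hat[i, j]) > thresh]

As a sanity check, on synthetic data where a few sources are copies or negations of one another, the recovered pairs should be exactly those correlated sources; the thresholding step trades false positives against false negatives.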

Cite this Paper

BibTeX
@InProceedings{pmlr-v97-varma19a,
  title     = {Learning Dependency Structures for Weak Supervision Models},
  author    = {Varma, Paroma and Sala, Frederic and He, Ann and Ratner, Alexander and Re, Christopher},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {6418--6427},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/varma19a/varma19a.pdf},
  url       = {https://proceedings.mlr.press/v97/varma19a.html},
  abstract  = {Labeling training data is a key bottleneck in the modern machine learning pipeline. Recent weak supervision approaches combine labels from multiple noisy sources by estimating their accuracies without access to ground truth labels; however, estimating the dependencies among these sources is a critical challenge. We focus on a robust PCA-based algorithm for learning these dependency structures, establish improved theoretical recovery rates, and outperform existing methods on various real-world tasks. Under certain conditions, we show that the amount of unlabeled data needed can scale sublinearly or even logarithmically with the number of sources m, improving over previous efforts that ignore the sparsity pattern in the dependency structure and scale linearly in m. We provide an information-theoretic lower bound on the minimum sample complexity of the weak supervision setting. Our method outperforms weak supervision approaches that assume conditionally-independent sources by up to 4.64 F1 points and previous structure learning approaches by up to 4.41 F1 points on real-world relation extraction and image classification tasks.}
}
Endnote
%0 Conference Paper
%T Learning Dependency Structures for Weak Supervision Models
%A Paroma Varma
%A Frederic Sala
%A Ann He
%A Alexander Ratner
%A Christopher Re
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-varma19a
%I PMLR
%P 6418--6427
%U https://proceedings.mlr.press/v97/varma19a.html
%V 97
%X Labeling training data is a key bottleneck in the modern machine learning pipeline. Recent weak supervision approaches combine labels from multiple noisy sources by estimating their accuracies without access to ground truth labels; however, estimating the dependencies among these sources is a critical challenge. We focus on a robust PCA-based algorithm for learning these dependency structures, establish improved theoretical recovery rates, and outperform existing methods on various real-world tasks. Under certain conditions, we show that the amount of unlabeled data needed can scale sublinearly or even logarithmically with the number of sources m, improving over previous efforts that ignore the sparsity pattern in the dependency structure and scale linearly in m. We provide an information-theoretic lower bound on the minimum sample complexity of the weak supervision setting. Our method outperforms weak supervision approaches that assume conditionally-independent sources by up to 4.64 F1 points and previous structure learning approaches by up to 4.41 F1 points on real-world relation extraction and image classification tasks.
APA
Varma, P., Sala, F., He, A., Ratner, A., & Re, C. (2019). Learning Dependency Structures for Weak Supervision Models. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:6418-6427. Available from https://proceedings.mlr.press/v97/varma19a.html.