Coresets for Data-efficient Training of Machine Learning Models

Baharan Mirzasoleiman, Jeff Bilmes, Jure Leskovec
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6950-6960, 2020.

Abstract

Incremental gradient (IG) methods, such as stochastic gradient descent and its variants, are commonly used for large-scale optimization in machine learning. Despite sustained efforts to make IG methods more data-efficient, it remains an open question how to select a training-data subset that can, both in theory and in practice, perform on par with the full dataset. Here we develop CRAIG, a method that selects a weighted subset (or coreset) of the training data that closely estimates the full gradient, by maximizing a submodular function. We prove that applying IG to this subset is guaranteed to converge to the (near) optimal solution at the same convergence rate as IG for convex optimization. As a result, CRAIG achieves a speedup that is inversely proportional to the size of the subset. To our knowledge, this is the first rigorous method for data-efficient training of general machine learning models. Our extensive experiments show that, while reaching practically the same solution, CRAIG speeds up various IG methods by up to 6x for logistic regression and 3x for training deep neural networks.
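To make the selection step concrete, the snippet below (Python with NumPy) is a minimal, illustrative sketch of CRAIG-style coreset selection, not the authors' implementation: it greedily maximizes a facility-location (submodular) objective over pairwise similarities of per-example gradient vectors and weights each selected example by the number of examples it represents. The choice of similarity, the use of exact per-example gradients, and the dense O(n^2) pairwise computation are simplifying assumptions made here for readability; the paper instead works with upper bounds on gradient differences and faster greedy variants.

import numpy as np

def craig_select(grads, k):
    """Greedy facility-location selection of a weighted coreset (illustrative sketch).

    grads : (n, d) array of per-example gradient (or proxy-gradient) vectors
    k     : number of examples to keep
    Returns (indices, weights), where weights[j] counts the examples whose
    closest selected element is indices[j].
    """
    n = grads.shape[0]
    # Pairwise similarities: shift Euclidean distances so that larger = more similar.
    # (Dense O(n^2) memory; fine for a sketch, not for very large n.)
    dists = np.linalg.norm(grads[:, None, :] - grads[None, :, :], axis=-1)
    sims = dists.max() - dists

    best = np.zeros(n)                 # best[i]: similarity of i to its closest selected element
    chosen = np.zeros(n, dtype=bool)   # mask of already-selected examples
    selected = []
    for _ in range(k):
        # Marginal gain of each candidate under F(S) = sum_i max_{j in S} sims[i, j].
        gains = np.maximum(sims, best[:, None]).sum(axis=0) - best.sum()
        gains[chosen] = -np.inf        # do not reselect
        j = int(np.argmax(gains))
        selected.append(j)
        chosen[j] = True
        best = np.maximum(best, sims[:, j])

    # Each kept example is weighted by how many examples it "represents".
    assign = np.argmax(sims[:, selected], axis=1)
    weights = np.bincount(assign, minlength=k)
    return np.array(selected), weights

# Illustrative usage with random stand-in gradients.
rng = np.random.default_rng(0)
grads = rng.normal(size=(500, 10))
idx, w = craig_select(grads, k=25)

Training then proceeds by running IG/SGD over the selected examples only, with each example's contribution scaled by its weight, which is how the weighted subset stands in for the full gradient.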

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-mirzasoleiman20a,
  title     = {Coresets for Data-efficient Training of Machine Learning Models},
  author    = {Mirzasoleiman, Baharan and Bilmes, Jeff and Leskovec, Jure},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {6950--6960},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/mirzasoleiman20a/mirzasoleiman20a.pdf},
  url       = {https://proceedings.mlr.press/v119/mirzasoleiman20a.html}
}
Endnote
%0 Conference Paper
%T Coresets for Data-efficient Training of Machine Learning Models
%A Baharan Mirzasoleiman
%A Jeff Bilmes
%A Jure Leskovec
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-mirzasoleiman20a
%I PMLR
%P 6950--6960
%U https://proceedings.mlr.press/v119/mirzasoleiman20a.html
%V 119
APA
Mirzasoleiman, B., Bilmes, J. & Leskovec, J. (2020). Coresets for Data-efficient Training of Machine Learning Models. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6950-6960. Available from https://proceedings.mlr.press/v119/mirzasoleiman20a.html.
