Not All Samples Are Created Equal: Deep Learning with Importance Sampling

Angelos Katharopoulos, Francois Fleuret
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2525-2534, 2018.

Abstract

Deep Neural Network training spends most of the computation on examples that are properly handled, and could be ignored. We propose to mitigate this phenomenon with a principled importance sampling scheme that focuses computation on "informative" examples, and reduces the variance of the stochastic gradients during training. Our contribution is twofold: first, we derive a tractable upper bound to the per-sample gradient norm, and second we derive an estimator of the variance reduction achieved with importance sampling, which enables us to switch it on when it will result in an actual speedup. The resulting scheme can be used by changing a few lines of code in a standard SGD procedure, and we demonstrate experimentally on image classification, CNN fine-tuning, and RNN training, that for a fixed wall-clock time budget, it provides a reduction of the train losses of up to an order of magnitude and a relative improvement of test errors between 5% and 17%.
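The scheme described above amounts to scoring candidate examples, sampling a minibatch in proportion to those scores, and reweighting the resulting gradient so it stays unbiased. The code below is a minimal sketch of that loop in PyTorch, not the authors' released implementation: it uses the per-sample loss as a cheap stand-in for the paper's upper bound on the per-sample gradient norm, and the names (importance_sampling_step, x_pool, y_pool) are hypothetical.

import torch
import torch.nn.functional as F

def importance_sampling_step(model, optimizer, x_pool, y_pool, batch_size):
    """One SGD step with importance sampling over a large uniformly drawn pool.

    Sketch only: the paper scores examples with a tractable upper bound on the
    per-sample gradient norm; here the per-sample loss is used as a simple proxy.
    """
    # 1) Score the pool with a cheap forward pass (no gradients needed).
    model.eval()
    with torch.no_grad():
        scores = F.cross_entropy(model(x_pool), y_pool, reduction="none")
    probs = (scores + 1e-8) / (scores + 1e-8).sum()   # sampling distribution over the pool

    # 2) Draw a small batch with probability proportional to the scores.
    idx = torch.multinomial(probs, batch_size, replacement=True)
    x, y = x_pool[idx], y_pool[idx]

    # 3) Reweight each sampled loss by 1 / (N * p_i) to keep the gradient unbiased.
    weights = 1.0 / (len(x_pool) * probs[idx])
    model.train()
    optimizer.zero_grad()
    per_sample_loss = F.cross_entropy(model(x), y, reduction="none")
    loss = (weights * per_sample_loss).mean()
    loss.backward()
    optimizer.step()
    return loss.item()

The full method additionally uses the paper's variance-reduction estimator to decide, during training, when importance sampling will actually yield a speedup and to fall back to plain uniform SGD otherwise; that switching logic is omitted from this sketch.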

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-katharopoulos18a,
  title     = {Not All Samples Are Created Equal: Deep Learning with Importance Sampling},
  author    = {Katharopoulos, Angelos and Fleuret, Francois},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {2525--2534},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/katharopoulos18a/katharopoulos18a.pdf},
  url       = {https://proceedings.mlr.press/v80/katharopoulos18a.html}
}
Endnote
%0 Conference Paper
%T Not All Samples Are Created Equal: Deep Learning with Importance Sampling
%A Angelos Katharopoulos
%A Francois Fleuret
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-katharopoulos18a
%I PMLR
%P 2525--2534
%U https://proceedings.mlr.press/v80/katharopoulos18a.html
%V 80
APA
Katharopoulos, A. & Fleuret, F. (2018). Not All Samples Are Created Equal: Deep Learning with Importance Sampling. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2525-2534. Available from https://proceedings.mlr.press/v80/katharopoulos18a.html.