Random Reshuffling with Variance Reduction: New Analysis and Better Rates

Grigory Malinovsky, Alibek Sailanbayev, Peter Richtárik
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:1347-1357, 2023.

Abstract

Virtually all state-of-the-art methods for training supervised machine learning models are variants of Stochastic Gradient Descent (SGD), enhanced with a number of additional tricks, such as minibatching, momentum, and adaptive stepsizes. However, one of the most basic questions in the design of successful SGD methods, one that is orthogonal to the aforementioned tricks, is the choice of the next training data point to learn from. Standard variants of SGD employ a sampling-with-replacement strategy, which means that the next training data point is sampled from the entire data set, often independently of all previous samples. While standard SGD is well understood theoretically, virtually all widely used machine learning software is based on sampling without replacement, as this is often empirically superior. That is, the training data is randomly shuffled/permuted, either only once at the beginning, a strategy known as random shuffling (RS), or before every epoch, a strategy known as random reshuffling (RR), and training proceeds in the data order dictated by the shuffling. The RS and RR strategies have long remained beyond the reach of theoretical analysis that would satisfactorily explain their success. However, very recently, Mishchenko et al. [2020] provided tight sublinear convergence rates through a novel analysis, and showed that these strategies can improve upon standard SGD in certain regimes. Inspired by these results, we seek to further improve the rates of shuffling-based methods. In particular, we show that it is possible to enhance them with a variance reduction mechanism, obtaining linear convergence rates. To the best of our knowledge, our linear convergence rates are the best for any method based on sampling without replacement.
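To make the abstract's two ingredients concrete, the sketch below shows one epoch of random reshuffling (a fresh permutation of the data each epoch, i.e., sampling without replacement) combined with an SVRG-style control variate as the variance reduction mechanism. This is a minimal illustration of the general idea, not the paper's exact algorithm or code; the function name rr_svrg_epoch, the stepsize, and the toy least-squares problem are our own choices for the example.

```python
import numpy as np


def rr_svrg_epoch(w, grads, lr, rng):
    """One epoch of random reshuffling with an SVRG-style control variate.

    Illustrative sketch only. `grads` is a list of per-example gradient
    functions, one for each f_i in the finite-sum objective.
    """
    n = len(grads)
    w_anchor = w.copy()                               # snapshot at the start of the epoch
    full_grad = sum(g(w_anchor) for g in grads) / n   # full gradient at the snapshot

    # Sampling WITHOUT replacement: visit every data point exactly once,
    # in a freshly shuffled order (random reshuffling, RR). Random shuffling (RS)
    # would instead fix one permutation before training and reuse it every epoch.
    perm = rng.permutation(n)
    for i in perm:
        # Variance-reduced stochastic gradient: correct the snapshot gradient
        # by the difference of the i-th gradient at the current and anchor points.
        g = grads[i](w) - grads[i](w_anchor) + full_grad
        w = w - lr * g
    return w


# Toy usage on a least-squares problem: f_i(w) = 0.5 * (a_i^T w - b_i)^2
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
b = rng.normal(size=50)
grads = [lambda w, a=A[i], bi=b[i]: a * (a @ w - bi) for i in range(50)]

w = np.zeros(5)
for epoch in range(100):
    w = rr_svrg_epoch(w, grads, lr=0.02, rng=rng)
```

If the control variate is dropped (using grads[i](w) directly in the inner loop), the sketch reduces to plain random reshuffling, for which the known rates are sublinear; the abstract's claim is that adding a variance reduction mechanism of this flavor yields linear convergence.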

Cite this Paper


BibTeX
@InProceedings{pmlr-v216-malinovsky23a,
  title     = {Random Reshuffling with Variance Reduction: New Analysis and Better Rates},
  author    = {Malinovsky, Grigory and Sailanbayev, Alibek and Richt\'{a}rik, Peter},
  booktitle = {Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence},
  pages     = {1347--1357},
  year      = {2023},
  editor    = {Evans, Robin J. and Shpitser, Ilya},
  volume    = {216},
  series    = {Proceedings of Machine Learning Research},
  month     = {31 Jul--04 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v216/malinovsky23a/malinovsky23a.pdf},
  url       = {https://proceedings.mlr.press/v216/malinovsky23a.html}
}
Endnote
%0 Conference Paper
%T Random Reshuffling with Variance Reduction: New Analysis and Better Rates
%A Grigory Malinovsky
%A Alibek Sailanbayev
%A Peter Richtárik
%B Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2023
%E Robin J. Evans
%E Ilya Shpitser
%F pmlr-v216-malinovsky23a
%I PMLR
%P 1347--1357
%U https://proceedings.mlr.press/v216/malinovsky23a.html
%V 216
APA
Malinovsky, G., Sailanbayev, A. & Richtárik, P. (2023). Random Reshuffling with Variance Reduction: New Analysis and Better Rates. Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 216:1347-1357. Available from https://proceedings.mlr.press/v216/malinovsky23a.html.