Understanding Progressive Training Through the Framework of Randomized Coordinate Descent

Rafał Szlendak, Elnur Gasanov, Peter Richtarik
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2161-2169, 2024.

Abstract

We propose a Randomized Progressive Training algorithm (RPT), a stochastic proxy for the well-known Progressive Training method (PT) (Karras et al., 2017). Originally designed to train GANs (Goodfellow et al., 2014), PT was proposed as a heuristic, with no convergence analysis even for the simplest objective functions. In contrast, to the best of our knowledge, RPT is the first PT-type algorithm with rigorous theoretical guarantees for general smooth objective functions. We cast our method into the established framework of Randomized Coordinate Descent (RCD) (Nesterov, 2012; Richtarik & Takac, 2014), for which (as a by-product of our investigations) we also propose a novel, simple and general convergence analysis encapsulating strongly convex, convex and nonconvex objectives. We then use this framework to establish a convergence theory for RPT. Finally, we validate the effectiveness of our method through extensive computational experiments.
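
To make the RCD connection concrete, the following is a minimal sketch of a progressive-training-style coordinate descent step. It assumes the "active submodel" corresponds to a random prefix of the coordinates, sampled uniformly at each iteration; the sampling rule, step size, and names (rcd_style_rpt, grad) are illustrative assumptions, not the exact algorithm or stepsize rule analyzed in the paper.

    import numpy as np

    def rcd_style_rpt(grad, x0, lr, n_iters, rng=None):
        """Sketch of a randomized-coordinate-descent-style update.

        At each step a random prefix of coordinates is treated as the
        active submodel (mimicking progressive training, where the
        early part of the model is trained first), and a gradient step
        is taken on those coordinates only. This illustrates the RCD
        template the paper builds on, not the authors' exact method.
        """
        rng = np.random.default_rng() if rng is None else rng
        x = x0.copy()
        d = x.size
        for _ in range(n_iters):
            k = rng.integers(1, d + 1)  # random prefix length (assumed sampling rule)
            g = grad(x)
            x[:k] -= lr * g[:k]         # update only the active coordinates
        return x

    # Usage on a simple smooth quadratic f(x) = 0.5 * ||A x - b||^2
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    grad = lambda x: A.T @ (A @ x - b)
    x_approx = rcd_style_rpt(grad, x0=np.zeros(2), lr=0.05, n_iters=2000)

On this smooth quadratic, every full gradient evaluation is cheap, but in the progressive-training setting the point of restricting the update to a prefix is that the truncated submodel is cheaper to evaluate and train.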

Cite this Paper

BibTeX
@InProceedings{pmlr-v238-szlendak24a,
  title     = {Understanding Progressive Training Through the Framework of Randomized Coordinate Descent},
  author    = {Szlendak, Rafa\l{} and Gasanov, Elnur and Richtarik, Peter},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {2161--2169},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/szlendak24a/szlendak24a.pdf},
  url       = {https://proceedings.mlr.press/v238/szlendak24a.html}
}
Endnote
%0 Conference Paper
%T Understanding Progressive Training Through the Framework of Randomized Coordinate Descent
%A Rafał Szlendak
%A Elnur Gasanov
%A Peter Richtarik
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-szlendak24a
%I PMLR
%P 2161--2169
%U https://proceedings.mlr.press/v238/szlendak24a.html
%V 238
APA
Szlendak, R., Gasanov, E. & Richtarik, P. (2024). Understanding Progressive Training Through the Framework of Randomized Coordinate Descent. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2161-2169. Available from https://proceedings.mlr.press/v238/szlendak24a.html.