Randomized Block-Diagonal Preconditioning for Parallel Learning

Celestine Mendler-Dünner, Aurelien Lucchi
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6841-6851, 2020.

Abstract

We study preconditioned gradient-based optimization methods where the preconditioning matrix has block-diagonal form. Such a structural constraint comes with the advantage that the update computation can be parallelized across multiple independent tasks. Our main contribution is to demonstrate that the convergence of these methods can be significantly improved by a randomization technique that corresponds to repartitioning coordinates across tasks during the optimization procedure. We provide a theoretical analysis that accurately characterizes the expected convergence gains of repartitioning and validate our findings empirically on various traditional machine learning tasks. From an implementation perspective, block-separable models are well suited for parallelization and, when shared memory is available, randomization can be implemented on top of existing methods very efficiently to improve convergence.
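
For intuition, the following is a minimal Python sketch (not the authors' implementation) of the idea described in the abstract: preconditioned gradient descent on a regularized least-squares objective, where the preconditioner is the block-diagonal part of the Hessian and the coordinate-to-block assignment is randomly repartitioned at every iteration. The function name, the choice of objective, and the unit-style step size are illustrative assumptions.

    import numpy as np

    def randomized_block_preconditioned_gd(A, b, lam=1.0, n_blocks=4,
                                           n_iters=100, step_size=1.0, seed=0):
        """Gradient descent on 0.5*||Ax - b||^2 + 0.5*lam*||x||^2 using a
        block-diagonal preconditioner whose blocks are randomly
        repartitioned at every iteration (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x = np.zeros(d)
        H = A.T @ A + lam * np.eye(d)          # full Hessian of the objective

        for _ in range(n_iters):
            grad = A.T @ (A @ x - b) + lam * x

            # Randomly repartition the d coordinates into n_blocks tasks.
            perm = rng.permutation(d)
            blocks = np.array_split(perm, n_blocks)

            # Each task solves a small linear system with its own diagonal
            # Hessian block; these solves are independent of one another and
            # could run in parallel.
            step = np.zeros(d)
            for idx in blocks:
                H_block = H[np.ix_(idx, idx)]
                step[idx] = np.linalg.solve(H_block, grad[idx])

            x -= step_size * step
        return x

Keeping the partition fixed across iterations recovers a standard block-diagonal preconditioned method; the random reshuffle at each iteration is the repartitioning step whose expected convergence gains the paper analyzes.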

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-mendler-dunner20a,
  title     = {Randomized Block-Diagonal Preconditioning for Parallel Learning},
  author    = {Mendler-D{\"u}nner, Celestine and Lucchi, Aurelien},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {6841--6851},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/mendler-dunner20a/mendler-dunner20a.pdf},
  url       = {https://proceedings.mlr.press/v119/mendler-dunner20a.html},
  abstract  = {We study preconditioned gradient-based optimization methods where the preconditioning matrix has block-diagonal form. Such a structural constraint comes with the advantage that the update computation can be parallelized across multiple independent tasks. Our main contribution is to demonstrate that the convergence of these methods can significantly be improved by a randomization technique which corresponds to repartitioning coordinates across tasks during the optimization procedure. We provide a theoretical analysis that accurately characterizes the expected convergence gains of repartitioning and validate our findings empirically on various traditional machine learning tasks. From an implementation perspective, block-separable models are well suited for parallelization and, when shared memory is available, randomization can be implemented on top of existing methods very efficiently to improve convergence.}
}
Endnote
%0 Conference Paper
%T Randomized Block-Diagonal Preconditioning for Parallel Learning
%A Celestine Mendler-Dünner
%A Aurelien Lucchi
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-mendler-dunner20a
%I PMLR
%P 6841--6851
%U https://proceedings.mlr.press/v119/mendler-dunner20a.html
%V 119
%X We study preconditioned gradient-based optimization methods where the preconditioning matrix has block-diagonal form. Such a structural constraint comes with the advantage that the update computation can be parallelized across multiple independent tasks. Our main contribution is to demonstrate that the convergence of these methods can significantly be improved by a randomization technique which corresponds to repartitioning coordinates across tasks during the optimization procedure. We provide a theoretical analysis that accurately characterizes the expected convergence gains of repartitioning and validate our findings empirically on various traditional machine learning tasks. From an implementation perspective, block-separable models are well suited for parallelization and, when shared memory is available, randomization can be implemented on top of existing methods very efficiently to improve convergence.
APA
Mendler-Dünner, C. & Lucchi, A. (2020). Randomized Block-Diagonal Preconditioning for Parallel Learning. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6841-6851. Available from https://proceedings.mlr.press/v119/mendler-dunner20a.html.