CWY Parametrization: a Solution for Parallelized Optimization of Orthogonal and Stiefel Matrices

Valerii Likhosherstov, Jared Davis, Krzysztof Choromanski, Adrian Weller
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:55-63, 2021.

Abstract

We introduce an efficient approach for optimization over orthogonal groups on highly parallel computation units such as GPUs or TPUs. As in earlier work, we parametrize an orthogonal matrix as a product of Householder reflections. However, to overcome the poor parallelizability of computing Householder reflections sequentially, we propose employing an accumulation scheme called the compact WY (or CWY) transform, a compact, parallelization-friendly matrix representation of a series of Householder reflections. We further develop a novel Truncated CWY (or T-CWY) approach for Stiefel manifold parametrization which has competitive complexity and, again, yields benefits when computed on GPUs and TPUs. We prove that our CWY and T-CWY methods lead to convergence to a stationary point of the training objective when coupled with stochastic gradient descent. We apply our methods to train recurrent neural network architectures on the tasks of neural machine translation and video prediction.
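
To make the accumulation concrete, below is a minimal NumPy sketch of the CWY idea. It is not the authors' reference implementation; it uses the classical compact WY identity, under which the product of reflections H(u_k)···H(u_1), for Householder vectors u_1, ..., u_k stacked as the columns of U, equals I - U S^{-1} U^T, where S is the strictly lower triangular part of UᵀU plus half of its diagonal. The function names are illustrative, and the column-truncation reading of T-CWY at the end is an assumption based on the abstract (the exact formulation is in the paper).

import numpy as np

def householder(u):
    # Single Householder reflection H(u) = I - 2 u u^T / ||u||^2.
    n = u.shape[0]
    return np.eye(n) - 2.0 * np.outer(u, u) / (u @ u)

def cwy(U):
    # Compact WY accumulation of the Householder vectors in the columns of U.
    # Returns I - U S^{-1} U^T with S = strict_lower(U^T U) + (1/2) diag(U^T U),
    # which equals the product H(u_k) ... H(u_1).
    n, k = U.shape
    G = U.T @ U                                   # small k x k Gram matrix
    S = np.tril(G, -1) + 0.5 * np.diag(np.diag(G))
    return np.eye(n) - U @ np.linalg.solve(S, U.T)

rng = np.random.default_rng(0)
n, k = 8, 4
U = rng.standard_normal((n, k))

Q = cwy(U)

# Q is orthogonal ...
assert np.allclose(Q.T @ Q, np.eye(n))

# ... and matches the sequential product of reflections (note the order).
P = np.eye(n)
for j in range(k - 1, -1, -1):                    # P = H(u_k) ... H(u_1)
    P = P @ householder(U[:, j])
assert np.allclose(P, Q)

# A point on the Stiefel manifold (n x m, orthonormal columns) can be read off
# as the first m columns of Q; computing only those columns cheaply is, as we
# read the abstract, the idea behind the truncated (T-CWY) variant.
m = 3
W = Q[:, :m]
assert np.allclose(W.T @ W, np.eye(m))

The point of the representation is that a sequential chain of k reflections collapses into a couple of large matrix multiplications plus one small k-by-k triangular solve, which is what maps well onto GPUs and TPUs.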

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-likhosherstov21a,
  title     = {CWY Parametrization: a Solution for Parallelized Optimization of Orthogonal and Stiefel Matrices},
  author    = {Likhosherstov, Valerii and Davis, Jared and Choromanski, Krzysztof and Weller, Adrian},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {55--63},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/likhosherstov21a/likhosherstov21a.pdf},
  url       = {https://proceedings.mlr.press/v130/likhosherstov21a.html}
}
Endnote
%0 Conference Paper
%T CWY Parametrization: a Solution for Parallelized Optimization of Orthogonal and Stiefel Matrices
%A Valerii Likhosherstov
%A Jared Davis
%A Krzysztof Choromanski
%A Adrian Weller
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-likhosherstov21a
%I PMLR
%P 55--63
%U https://proceedings.mlr.press/v130/likhosherstov21a.html
%V 130
APA
Likhosherstov, V., Davis, J., Choromanski, K., & Weller, A. (2021). CWY Parametrization: a Solution for Parallelized Optimization of Orthogonal and Stiefel Matrices. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:55-63. Available from https://proceedings.mlr.press/v130/likhosherstov21a.html.
