Improving Optimization for Models With Continuous Symmetry Breaking

Robert Bamler, Stephan Mandt
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:423-432, 2018.

Abstract

Many loss functions in representation learning are invariant under a continuous symmetry transformation. For example, the loss function of word embeddings (Mikolov et al., 2013) remains unchanged if we simultaneously rotate all word and context embedding vectors. We show that representation learning models for time series possess an approximate continuous symmetry that leads to slow convergence of gradient descent. We propose a new optimization algorithm that speeds up convergence using ideas from gauge theory in physics. Our algorithm leads to orders of magnitude faster convergence and to more interpretable representations, as we show for dynamic extensions of matrix factorization and word embedding models. We further present an example application of our proposed algorithm that translates modern words into their historic equivalents.
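A minimal sketch (not code from the paper) of the continuous symmetry the abstract refers to: for a matrix-factorization-style embedding loss ||X - U V^T||^2, simultaneously rotating all word vectors U and context vectors V by the same orthogonal matrix R leaves the loss unchanged, since (U R)(V R)^T = U R R^T V^T = U V^T. The variable names and the toy loss below are illustrative assumptions, not the authors' model.

import numpy as np

rng = np.random.default_rng(0)
n_words, n_contexts, dim = 50, 40, 8

X = rng.normal(size=(n_words, n_contexts))   # co-occurrence-like target matrix
U = rng.normal(size=(n_words, dim))          # "word" embedding vectors
V = rng.normal(size=(n_contexts, dim))       # "context" embedding vectors

def loss(U, V):
    # squared reconstruction error of the factorization U V^T
    return np.sum((X - U @ V.T) ** 2)

# Draw a random rotation R (orthogonal matrix from a QR decomposition).
R, _ = np.linalg.qr(rng.normal(size=(dim, dim)))

print(loss(U, V))          # original loss
print(loss(U @ R, V @ R))  # identical up to floating-point error

Because the loss is flat along these rotation directions, plain gradient descent receives no signal there; the paper's contribution is an optimizer that handles the case where this symmetry is only approximate and therefore broken.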

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-bamler18a,
  title     = {Improving Optimization for Models With Continuous Symmetry Breaking},
  author    = {Bamler, Robert and Mandt, Stephan},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {423--432},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/bamler18a/bamler18a.pdf},
  url       = {https://proceedings.mlr.press/v80/bamler18a.html},
  abstract  = {Many loss functions in representation learning are invariant under a continuous symmetry transformation. For example, the loss function of word embeddings (Mikolov et al., 2013) remains unchanged if we simultaneously rotate all word and context embedding vectors. We show that representation learning models for time series possess an approximate continuous symmetry that leads to slow convergence of gradient descent. We propose a new optimization algorithm that speeds up convergence using ideas from gauge theory in physics. Our algorithm leads to orders of magnitude faster convergence and to more interpretable representations, as we show for dynamic extensions of matrix factorization and word embedding models. We further present an example application of our proposed algorithm that translates modern words into their historic equivalents.}
}
Endnote
%0 Conference Paper
%T Improving Optimization for Models With Continuous Symmetry Breaking
%A Robert Bamler
%A Stephan Mandt
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-bamler18a
%I PMLR
%P 423--432
%U https://proceedings.mlr.press/v80/bamler18a.html
%V 80
%X Many loss functions in representation learning are invariant under a continuous symmetry transformation. For example, the loss function of word embeddings (Mikolov et al., 2013) remains unchanged if we simultaneously rotate all word and context embedding vectors. We show that representation learning models for time series possess an approximate continuous symmetry that leads to slow convergence of gradient descent. We propose a new optimization algorithm that speeds up convergence using ideas from gauge theory in physics. Our algorithm leads to orders of magnitude faster convergence and to more interpretable representations, as we show for dynamic extensions of matrix factorization and word embedding models. We further present an example application of our proposed algorithm that translates modern words into their historic equivalents.
APA
Bamler, R. & Mandt, S. (2018). Improving Optimization for Models With Continuous Symmetry Breaking. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:423-432. Available from https://proceedings.mlr.press/v80/bamler18a.html.