A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces

Charline Le Lan, Joshua Greaves, Jesse Farebrother, Mark Rowland, Fabian Pedregosa, Rishabh Agarwal, Marc G. Bellemare
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:1703-1718, 2023.

Abstract

Many machine learning problems encode their data as a matrix with a possibly very large number of rows and columns. In several applications, such as neuroscience, image compression, or deep reinforcement learning, the principal subspace of such a matrix provides a useful, low-dimensional representation of individual data points. Here, we are interested in determining the $d$-dimensional principal subspace of a given matrix from sample entries, i.e., from small random submatrices. Although a number of sample-based methods exist for this problem (e.g., Oja's rule (Oja, 1982)), these either assume access to full columns of the matrix or require particular matrix structure, such as symmetry, and cannot be combined as-is with neural networks (Baldi et al., 1989). In this paper, we derive an algorithm that learns a principal subspace from sample entries, can be applied when the approximate subspace is represented by a neural network, and hence can be scaled to datasets with an effectively infinite number of rows and columns. Our method consists of defining a loss function whose minimizer is the desired principal subspace and constructing a gradient estimate of this loss whose bias can be controlled. We complement our theoretical analysis with a series of experiments on synthetic matrices, the MNIST dataset (LeCun et al., 2010), and the reinforcement learning domain PuddleWorld (Sutton, 1995), demonstrating the usefulness of our approach.
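To make the data-access setting concrete, the sketch below illustrates the general idea of learning a $d$-dimensional subspace by stochastic gradient descent on small, randomly sampled submatrices. It uses a plain low-rank reconstruction loss and an explicit NumPy parameterisation rather than the paper's actual loss, gradient estimator, or neural-network parameterisation; the names (Phi, W) and sampling sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal illustrative sketch (NOT the paper's algorithm): SGD on randomly
# sampled submatrices of M, with an explicit low-rank factorisation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic target matrix with rank-d structure (stand-in for a large data matrix).
n_rows, n_cols, d = 200, 300, 5
M = rng.normal(size=(n_rows, d)) @ rng.normal(size=(d, n_cols))

# Parameters of the candidate subspace: the columns of Phi should come to span
# the top-d left singular subspace of M; W holds the per-column coefficients.
Phi = rng.normal(scale=0.1, size=(n_rows, d))
W = rng.normal(scale=0.1, size=(d, n_cols))

lr, n_steps = 0.01, 20000
rows_per_step, cols_per_step = 20, 20

for _ in range(n_steps):
    # Observe only a small random submatrix of M (a few rows and columns).
    i = rng.choice(n_rows, size=rows_per_step, replace=False)
    j = rng.choice(n_cols, size=cols_per_step, replace=False)
    M_ij = M[np.ix_(i, j)]

    # Residual of the low-rank reconstruction restricted to the sampled entries.
    R = Phi[i] @ W[:, j] - M_ij

    # Stochastic gradients of 0.5 * ||R||_F^2 with respect to the sampled blocks.
    grad_Phi = R @ W[:, j].T
    grad_W = Phi[i].T @ R
    Phi[i] -= lr * grad_Phi
    W[:, j] -= lr * grad_W

# Compare span(Phi) with the top-d left singular subspace of M: the alignment
# below approaches sqrt(d) when the two subspaces coincide.
U, _, _ = np.linalg.svd(M, full_matrices=False)
Q, _ = np.linalg.qr(Phi)
print("subspace alignment (max = %.3f):" % np.sqrt(d),
      np.linalg.norm(U[:, :d].T @ Q))
```

The paper's contribution is precisely a loss and gradient estimator whose bias can be controlled in this sampled-entry regime, including when the subspace is represented by a neural network; the plain factorisation above is only meant to fix notation and the data-access pattern.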

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-lan23a,
  title     = {A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces},
  author    = {Lan, Charline Le and Greaves, Joshua and Farebrother, Jesse and Rowland, Mark and Pedregosa, Fabian and Agarwal, Rishabh and Bellemare, Marc G.},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {1703--1718},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/lan23a/lan23a.pdf},
  url       = {https://proceedings.mlr.press/v206/lan23a.html}
}
Endnote
%0 Conference Paper
%T A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces
%A Charline Le Lan
%A Joshua Greaves
%A Jesse Farebrother
%A Mark Rowland
%A Fabian Pedregosa
%A Rishabh Agarwal
%A Marc G. Bellemare
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-lan23a
%I PMLR
%P 1703--1718
%U https://proceedings.mlr.press/v206/lan23a.html
%V 206
APA
Lan, C.L., Greaves, J., Farebrother, J., Rowland, M., Pedregosa, F., Agarwal, R. & Bellemare, M.G. (2023). A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:1703-1718. Available from https://proceedings.mlr.press/v206/lan23a.html.
