Block Subsampled Randomized Hadamard Transform for Nyström Approximation on Distributed Architectures

Oleg Balabanov, Matthias Beaupère, Laura Grigori, Victor Lederer
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:1564-1576, 2023.

Abstract

This article introduces a novel structured random matrix composed blockwise from subsampled randomized Hadamard transforms (SRHTs). The block SRHT is expected to outperform well-known dimension reduction maps, including the SRHT and Gaussian matrices, on distributed architectures. We prove that a block SRHT with enough rows is an oblivious subspace embedding, i.e., an approximate isometry for an arbitrary low-dimensional subspace with high probability. Our estimate of the required number of rows is similar to that of the standard SRHT. This suggests that the two transforms should provide the same accuracy of approximation in the algorithms. The block SRHT can be readily incorporated into randomized methods for computing a low-rank approximation of a large-scale matrix, such as the Nyström method. For completeness, we revisit this method with a discussion of its implementation on distributed architectures.
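To make the idea concrete, the following is a minimal sketch of a blockwise SRHT: each of `p` processors holds a row block of the matrix, applies an independent SRHT locally, and the global sketch is the sum of the local sketches. This is an illustrative simplification under assumed block sizes and scaling, not the paper's exact construction (which includes additional randomization), and all names here (`srht`, `n`, `d`, `p`, `l`) are hypothetical.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)

def srht(block, l, rng):
    """Subsampled randomized Hadamard transform of one local row block.

    block: (m, d) rows held by one processor; m must be a power of two.
    l: sketch size (number of sampled rows).
    """
    m = block.shape[0]
    H = hadamard(m) / np.sqrt(m)                 # orthonormal Walsh-Hadamard matrix
    signs = rng.choice([-1.0, 1.0], size=m)      # random sign flips (diagonal D)
    idx = rng.choice(m, size=l, replace=False)   # uniform row subsampling (P)
    return np.sqrt(m / l) * H[idx] @ (signs[:, None] * block)

# Each of p "processors" holds a contiguous row block of A; the global
# sketch is the sum of the independent local SRHT sketches, so only an
# (l x d) reduction needs to be communicated per node.
n, d, p, l = 1024, 20, 4, 128
A = rng.standard_normal((n, d))
blocks = np.split(A, p)                          # row distribution across p nodes
sketch = sum(srht(B, l, rng) for B in blocks)    # independent randomness per block

# Embedding check: the sketched Gram matrix should approximate A.T @ A
err = np.linalg.norm(sketch.T @ sketch - A.T @ A) / np.linalg.norm(A.T @ A)
```

In expectation the cross-block terms vanish (each local SRHT has zero mean) and each local transform is an isometry on average, which is why summing independent local sketches is a reasonable distributed strategy; the paper's contribution is proving the high-probability subspace-embedding guarantee for its block construction.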

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-balabanov23a,
  title     = {Block Subsampled Randomized Hadamard Transform for Nyström Approximation on Distributed Architectures},
  author    = {Balabanov, Oleg and Beaup\`{e}re, Matthias and Grigori, Laura and Lederer, Victor},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {1564--1576},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/balabanov23a/balabanov23a.pdf},
  url       = {https://proceedings.mlr.press/v202/balabanov23a.html},
  abstract  = {This article introduces a novel structured random matrix composed blockwise from subsampled randomized Hadamard transforms (SRHTs). The block SRHT is expected to outperform well-known dimension reduction maps, including SRHT and Gaussian matrices on distributed architectures. We prove that a block SRHT with enough rows is an oblivious subspace embedding, i.e., an approximate isometry for an arbitrary low-dimensional subspace with high probability. Our estimate of the required number of rows is similar to that of the standard SRHT. This suggests that the two transforms should provide the same accuracy of approximation in the algorithms. The block SRHT can be readily incorporated into randomized methods for computing a low-rank approximation of a large-scale matrix, such as the Nyström method. For completeness, we revisit this method with a discussion of its implementation on distributed architectures.}
}
Endnote
%0 Conference Paper
%T Block Subsampled Randomized Hadamard Transform for Nyström Approximation on Distributed Architectures
%A Oleg Balabanov
%A Matthias Beaupère
%A Laura Grigori
%A Victor Lederer
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-balabanov23a
%I PMLR
%P 1564--1576
%U https://proceedings.mlr.press/v202/balabanov23a.html
%V 202
%X This article introduces a novel structured random matrix composed blockwise from subsampled randomized Hadamard transforms (SRHTs). The block SRHT is expected to outperform well-known dimension reduction maps, including SRHT and Gaussian matrices on distributed architectures. We prove that a block SRHT with enough rows is an oblivious subspace embedding, i.e., an approximate isometry for an arbitrary low-dimensional subspace with high probability. Our estimate of the required number of rows is similar to that of the standard SRHT. This suggests that the two transforms should provide the same accuracy of approximation in the algorithms. The block SRHT can be readily incorporated into randomized methods for computing a low-rank approximation of a large-scale matrix, such as the Nyström method. For completeness, we revisit this method with a discussion of its implementation on distributed architectures.
APA
Balabanov, O., Beaupère, M., Grigori, L. &amp; Lederer, V. (2023). Block Subsampled Randomized Hadamard Transform for Nyström Approximation on Distributed Architectures. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:1564-1576. Available from https://proceedings.mlr.press/v202/balabanov23a.html.