Fast algorithm for overcomplete order-3 tensor decomposition

Jingqiu Ding, Tommaso d’Orsi, Chih-Hung Liu, David Steurer, Stefan Tiegel
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3741-3799, 2022.

Abstract

We develop the first fast spectral algorithm to decompose a random third-order tensor over ℝ^d of rank up to O(d^{3/2}/polylog(d)). Our algorithm involves only simple linear-algebra operations and recovers all components in time O(d^{6.05}) given current fast matrix-multiplication algorithms. Prior to this work, comparable guarantees could be achieved only via sum-of-squares [Ma, Shi, Steurer 2016]. In contrast, fast algorithms [Hopkins, Schramm, Shi, Steurer 2016] could only decompose tensors of rank at most O(d^{4/3}/polylog(d)). Our algorithmic result rests on two key ingredients: a clean lifting of the third-order tensor to a sixth-order tensor, expressible in the language of tensor networks, and a careful decomposition of the tensor network into a sequence of rectangular matrix multiplications, which yields a fast implementation of the algorithm.
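
To make the two ingredients concrete, here is a minimal numpy sketch; it is illustrative only, not the paper's algorithm, and all variable names are ours. Naively contracting two copies of T = Σ_i a_i^{⊗3} already produces the order-6 tensor Σ_{i,j} (a_i ⊗ a_j)^{⊗3}; the paper's carefully designed tensor network must additionally suppress the i ≠ j cross terms. The sketch also shows how such a contraction can be rewritten as one rectangular matrix multiplication plus an index permutation, the kind of rewriting behind the fast implementation.

import numpy as np

d, n = 6, 10  # dimension d and rank n; n > d makes T overcomplete
rng = np.random.default_rng(0)
A = rng.standard_normal((n, d))  # row i is the component a_i

# Order-3 tensor T = sum_i a_i ⊗ a_i ⊗ a_i.
T = np.einsum('ia,ib,ic->abc', A, A, A)

# Naive order-6 lift: contract two copies of T and pair indices so the
# result, reshaped to d^2 x d^2 x d^2, equals sum_{i,j} (a_i ⊗ a_j)^{⊗3}.
L = np.einsum('abc,def->adbecf', T, T).reshape(d*d, d*d, d*d)

# The target of the lifting keeps only the diagonal i = j terms,
# sum_i (a_i ⊗ a_i)^{⊗3}: an order-3 tensor in dimension d^2 whose
# components a_i ⊗ a_i are no longer overcomplete when n ≤ d^{3/2}.
B = np.einsum('ia,ib->iab', A, A).reshape(n, d*d)
L_target = np.einsum('ix,iy,iz->xyz', B, B, B)

# The naive lift differs from the target exactly by the i != j cross
# terms, which a carefully chosen tensor network has to suppress.
print(np.linalg.norm(L - L_target))

# The contraction above is one rectangular matrix multiplication (an
# outer product of vec(T) with itself) followed by an index permutation;
# decomposing the full network into such products is what gives speed.
M = T.reshape(d**3, 1) @ T.reshape(1, d**3)
L2 = M.reshape(d, d, d, d, d, d).transpose(0, 3, 1, 4, 2, 5).reshape(d*d, d*d, d*d)
assert np.allclose(L, L2)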

Cite this Paper

BibTeX
@InProceedings{pmlr-v178-ding22a,
  title = {Fast algorithm for overcomplete order-3 tensor decomposition},
  author = {Ding, Jingqiu and d'Orsi, Tommaso and Liu, Chih-Hung and Steurer, David and Tiegel, Stefan},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages = {3741--3799},
  year = {2022},
  editor = {Loh, Po-Ling and Raginsky, Maxim},
  volume = {178},
  series = {Proceedings of Machine Learning Research},
  month = {02--05 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v178/ding22a/ding22a.pdf},
  url = {https://proceedings.mlr.press/v178/ding22a.html},
  abstract = {We develop the first fast spectral algorithm to decompose a random third-order tensor over $\mathbb{R}^d$ of rank up to $O(d^{3/2}/\mathrm{polylog}(d))$. Our algorithm involves only simple linear-algebra operations and recovers all components in time $O(d^{6.05})$ given current fast matrix-multiplication algorithms. Prior to this work, comparable guarantees could be achieved only via sum-of-squares [Ma, Shi, Steurer 2016]. In contrast, fast algorithms [Hopkins, Schramm, Shi, Steurer 2016] could only decompose tensors of rank at most $O(d^{4/3}/\mathrm{polylog}(d))$. Our algorithmic result rests on two key ingredients: a clean lifting of the third-order tensor to a sixth-order tensor, expressible in the language of tensor networks, and a careful decomposition of the tensor network into a sequence of rectangular matrix multiplications, which yields a fast implementation of the algorithm.}
}
Endnote
%0 Conference Paper
%T Fast algorithm for overcomplete order-3 tensor decomposition
%A Jingqiu Ding
%A Tommaso d’Orsi
%A Chih-Hung Liu
%A David Steurer
%A Stefan Tiegel
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-ding22a
%I PMLR
%P 3741--3799
%U https://proceedings.mlr.press/v178/ding22a.html
%V 178
%X We develop the first fast spectral algorithm to decompose a random third-order tensor over $\mathbb{R}^d$ of rank up to $O(d^{3/2}/\mathrm{polylog}(d))$. Our algorithm involves only simple linear-algebra operations and recovers all components in time $O(d^{6.05})$ given current fast matrix-multiplication algorithms. Prior to this work, comparable guarantees could be achieved only via sum-of-squares [Ma, Shi, Steurer 2016]. In contrast, fast algorithms [Hopkins, Schramm, Shi, Steurer 2016] could only decompose tensors of rank at most $O(d^{4/3}/\mathrm{polylog}(d))$. Our algorithmic result rests on two key ingredients: a clean lifting of the third-order tensor to a sixth-order tensor, expressible in the language of tensor networks, and a careful decomposition of the tensor network into a sequence of rectangular matrix multiplications, which yields a fast implementation of the algorithm.
APA
Ding, J., d’Orsi, T., Liu, C., Steurer, D. & Tiegel, S. (2022). Fast algorithm for overcomplete order-3 tensor decomposition. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:3741-3799. Available from https://proceedings.mlr.press/v178/ding22a.html.
