Fast and Provable Nonconvex Tensor RPCA

Haiquan Qiu, Yao Wang, Shaojie Tang, Deyu Meng, Quanming Yao
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:18211-18249, 2022.

Abstract

In this paper, we study nonconvex tensor robust principal component analysis (RPCA) based on the $t$-SVD. We first propose an alternating projection method, APT, which converges linearly to the ground truth under standard tensor incoherence conditions. However, since the projection onto the low-rank tensor space in APT can be slow, we further propose to speed up this step by exploiting the tangent space of the low-rank tensor manifold. The resulting algorithm, EAPT, is not only more efficient than APT but also retains linear convergence. Compared with existing tensor RPCA methods, the proposed approach, especially EAPT, is more effective thanks to its recovery guarantee and its adaptation in the transformed (frequency) domain, and more efficient thanks to a faster convergence rate and lower per-iteration complexity. These benefits are verified empirically on both synthetic data and real applications, e.g., hyperspectral image denoising and video background subtraction.
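The alternating-projection idea can be illustrated in the simpler matrix setting: alternately project the residual onto the set of rank-$r$ matrices (via truncated SVD) and hard-threshold what remains to estimate the sparse corruption. This is a minimal sketch only, not the paper's $t$-SVD-based APT/EAPT algorithm; the geometrically decaying threshold `thresh_decay` is an assumed heuristic standing in for the paper's threshold schedule.

```python
import numpy as np

def alt_proj_rpca(M, rank, n_iters=100, thresh_decay=0.9):
    """Sketch of alternating-projection RPCA for matrices.

    Alternates between (i) a rank-`rank` projection of M - S via
    truncated SVD and (ii) hard-thresholding of M - L to update the
    sparse estimate S, with a decaying threshold (assumed heuristic).
    """
    S = np.zeros_like(M)
    zeta = np.max(np.abs(M))  # initial threshold level
    for _ in range(n_iters):
        # Project the de-sparsified observation onto rank-r matrices.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Hard-threshold the residual to pick out sparse outliers.
        zeta *= thresh_decay
        R = M - L
        S = np.where(np.abs(R) > zeta, R, 0.0)
    return L, S
```

On a synthetic low-rank-plus-sparse matrix, this recovers the low-rank part to small relative error; the tensor algorithms in the paper apply the same alternation with $t$-SVD projections (and, for EAPT, a tangent-space approximation of the low-rank projection).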

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-qiu22d,
  title = {Fast and Provable Nonconvex Tensor {RPCA}},
  author = {Qiu, Haiquan and Wang, Yao and Tang, Shaojie and Meng, Deyu and Yao, Quanming},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages = {18211--18249},
  year = {2022},
  editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume = {162},
  series = {Proceedings of Machine Learning Research},
  month = {17--23 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v162/qiu22d/qiu22d.pdf},
  url = {https://proceedings.mlr.press/v162/qiu22d.html},
  abstract = {In this paper, we study nonconvex tensor robust principal component analysis (RPCA) based on the $t$-SVD. We first propose an alternating projection method, APT, which converges linearly to the ground truth under standard tensor incoherence conditions. However, since the projection onto the low-rank tensor space in APT can be slow, we further propose to speed up this step by exploiting the tangent space of the low-rank tensor manifold. The resulting algorithm, EAPT, is not only more efficient than APT but also retains linear convergence. Compared with existing tensor RPCA methods, the proposed approach, especially EAPT, is more effective thanks to its recovery guarantee and its adaptation in the transformed (frequency) domain, and more efficient thanks to a faster convergence rate and lower per-iteration complexity. These benefits are verified empirically on both synthetic data and real applications, e.g., hyperspectral image denoising and video background subtraction.}
}
Endnote
%0 Conference Paper
%T Fast and Provable Nonconvex Tensor RPCA
%A Haiquan Qiu
%A Yao Wang
%A Shaojie Tang
%A Deyu Meng
%A Quanming Yao
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-qiu22d
%I PMLR
%P 18211--18249
%U https://proceedings.mlr.press/v162/qiu22d.html
%V 162
%X In this paper, we study nonconvex tensor robust principal component analysis (RPCA) based on the $t$-SVD. We first propose an alternating projection method, APT, which converges linearly to the ground truth under standard tensor incoherence conditions. However, since the projection onto the low-rank tensor space in APT can be slow, we further propose to speed up this step by exploiting the tangent space of the low-rank tensor manifold. The resulting algorithm, EAPT, is not only more efficient than APT but also retains linear convergence. Compared with existing tensor RPCA methods, the proposed approach, especially EAPT, is more effective thanks to its recovery guarantee and its adaptation in the transformed (frequency) domain, and more efficient thanks to a faster convergence rate and lower per-iteration complexity. These benefits are verified empirically on both synthetic data and real applications, e.g., hyperspectral image denoising and video background subtraction.
APA
Qiu, H., Wang, Y., Tang, S., Meng, D. & Yao, Q. (2022). Fast and Provable Nonconvex Tensor RPCA. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:18211-18249. Available from https://proceedings.mlr.press/v162/qiu22d.html.
