Scalable and Sound Low-Rank Tensor Learning

Hao Cheng, Yaoliang Yu, Xinhua Zhang, Eric Xing, Dale Schuurmans
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:1114-1123, 2016.

Abstract

Many real-world data arise naturally as tensors. Equipped with a low-rank prior, learning algorithms can benefit from exploiting the rich dependency encoded in a tensor. Despite its prevalence in low-rank matrix learning, the trace norm ceases to be tractable for tensors, and therefore most existing works resort to matrix unfolding. Although some theoretical guarantees are available, these approaches may lose valuable structural information and are not scalable in general. To address this problem, we propose directly optimizing the tensor trace norm by approximating its dual spectral norm, and we show that the approximation bounds can be efficiently converted to the original problem via the generalized conditional gradient algorithm. The resulting approach is scalable to large datasets and matches state-of-the-art recovery guarantees. Experimental results on tensor completion and multitask learning confirm the superiority of the proposed method.
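To make the abstract's approach concrete, below is a minimal, hypothetical Python/NumPy sketch of the general idea: a conditional-gradient loop for tensor completion over a tensor trace-norm ball, where the linear oracle (the dual spectral norm problem, NP-hard for tensors in general) is approximated by alternating power iteration. This is not the authors' implementation; the function names, the power-iteration heuristic, the 2/(t+2) step size, and the problem setup are all illustrative assumptions.

import numpy as np

def spectral_norm_oracle(G, n_iter=50, rng=None):
    """Approximately maximize <G, u x v x w> over unit vectors u, v, w
    via alternating power iteration (a common heuristic; the exact
    tensor spectral norm is intractable in general)."""
    rng = np.random.default_rng() if rng is None else rng
    d1, d2, d3 = G.shape
    u = rng.standard_normal(d1); u /= np.linalg.norm(u)
    v = rng.standard_normal(d2); v /= np.linalg.norm(v)
    w = rng.standard_normal(d3); w /= np.linalg.norm(w)
    for _ in range(n_iter):
        u = np.einsum('ijk,j,k->i', G, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', G, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', G, u, v); w /= np.linalg.norm(w)
    return u, v, w

def cg_tensor_completion(Y, mask, tau=10.0, n_steps=100):
    """Conditional gradient for min 0.5*||mask*(X - Y)||^2 subject to
    ||X||_tr <= tau: each step adds the rank-one atom returned by the
    (approximate) spectral-norm oracle, scaled to the ball's boundary."""
    X = np.zeros_like(Y)
    for t in range(n_steps):
        G = mask * (Y - X)            # negative gradient of the loss
        u, v, w = spectral_norm_oracle(G)
        atom = tau * np.einsum('i,j,k->ijk', u, v, w)
        gamma = 2.0 / (t + 2.0)       # standard conditional-gradient step
        X = (1 - gamma) * X + gamma * atom
    return X

Given a partially observed third-order tensor Y and a 0/1 observation mask, cg_tensor_completion(Y, mask) returns an estimate built as a convex combination of at most n_steps rank-one atoms, so each iteration costs only one approximate oracle call rather than a full decomposition of an unfolded matrix.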

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-cheng16,
  title = {Scalable and Sound Low-Rank Tensor Learning},
  author = {Cheng, Hao and Yu, Yaoliang and Zhang, Xinhua and Xing, Eric and Schuurmans, Dale},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages = {1114--1123},
  year = {2016},
  editor = {Gretton, Arthur and Robert, Christian C.},
  volume = {51},
  series = {Proceedings of Machine Learning Research},
  address = {Cadiz, Spain},
  month = {09--11 May},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v51/cheng16.pdf},
  url = {https://proceedings.mlr.press/v51/cheng16.html},
  abstract = {Many real-world data arise naturally as tensors. Equipped with a low rank prior, learning algorithms can benefit from exploiting the rich dependency encoded in a tensor. Despite its prevalence in low-rank matrix learning, trace norm ceases to be tractable in tensors and therefore most existing works resort to matrix unfolding. Although some theoretical guarantees are available, these approaches may lose valuable structure information and are not scalable in general. To address this problem, we propose directly optimizing the tensor trace norm by approximating its dual spectral norm, and we show that the approximation bounds can be efficiently converted to the original problem via the generalized conditional gradient algorithm. The resulting approach is scalable to large datasets, and matches state-of-the-art recovery guarantees. Experimental results on tensor completion and multitask learning confirm the superiority of the proposed method.}
}
Endnote
%0 Conference Paper
%T Scalable and Sound Low-Rank Tensor Learning
%A Hao Cheng
%A Yaoliang Yu
%A Xinhua Zhang
%A Eric Xing
%A Dale Schuurmans
%B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2016
%E Arthur Gretton
%E Christian C. Robert
%F pmlr-v51-cheng16
%I PMLR
%P 1114--1123
%U https://proceedings.mlr.press/v51/cheng16.html
%V 51
%X Many real-world data arise naturally as tensors. Equipped with a low rank prior, learning algorithms can benefit from exploiting the rich dependency encoded in a tensor. Despite its prevalence in low-rank matrix learning, trace norm ceases to be tractable in tensors and therefore most existing works resort to matrix unfolding. Although some theoretical guarantees are available, these approaches may lose valuable structure information and are not scalable in general. To address this problem, we propose directly optimizing the tensor trace norm by approximating its dual spectral norm, and we show that the approximation bounds can be efficiently converted to the original problem via the generalized conditional gradient algorithm. The resulting approach is scalable to large datasets, and matches state-of-the-art recovery guarantees. Experimental results on tensor completion and multitask learning confirm the superiority of the proposed method.
RIS
TY - CPAPER
TI - Scalable and Sound Low-Rank Tensor Learning
AU - Hao Cheng
AU - Yaoliang Yu
AU - Xinhua Zhang
AU - Eric Xing
AU - Dale Schuurmans
BT - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
DA - 2016/05/02
ED - Arthur Gretton
ED - Christian C. Robert
ID - pmlr-v51-cheng16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 51
SP - 1114
EP - 1123
L1 - http://proceedings.mlr.press/v51/cheng16.pdf
UR - https://proceedings.mlr.press/v51/cheng16.html
AB - Many real-world data arise naturally as tensors. Equipped with a low rank prior, learning algorithms can benefit from exploiting the rich dependency encoded in a tensor. Despite its prevalence in low-rank matrix learning, trace norm ceases to be tractable in tensors and therefore most existing works resort to matrix unfolding. Although some theoretical guarantees are available, these approaches may lose valuable structure information and are not scalable in general. To address this problem, we propose directly optimizing the tensor trace norm by approximating its dual spectral norm, and we show that the approximation bounds can be efficiently converted to the original problem via the generalized conditional gradient algorithm. The resulting approach is scalable to large datasets, and matches state-of-the-art recovery guarantees. Experimental results on tensor completion and multitask learning confirm the superiority of the proposed method.
ER -
APA
Cheng, H., Yu, Y., Zhang, X., Xing, E. & Schuurmans, D. (2016). Scalable and Sound Low-Rank Tensor Learning. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 51:1114-1123. Available from https://proceedings.mlr.press/v51/cheng16.html.
