Deep Self-expressive Learning

Chen Zhao, Chun-Guang Li, Wei He, Chong You
Conference on Parsimony and Learning, PMLR 234:228-247, 2024.

Abstract

The self-expressive model is a method for clustering data drawn from a union of low-dimensional linear subspaces. It has gained popularity for: 1) its simplicity, which rests on the observation that each data point can be expressed as a linear combination of the other data points; 2) its provable correctness under broad geometric and statistical conditions; and 3) its many extensions for handling corrupted, imbalanced, and large-scale real data. This paper extends the self-expressive model to a Deep sELf-expressiVE model (DELVE) for the more challenging case in which the data lie in a union of nonlinear manifolds. DELVE is constructed by stacking self-expressive layers, each of which maps each data point to a linear combination of the other data points, and can be trained by minimizing self-expressive losses. With this design, the operator, architecture, and training of DELVE have an explicit interpretation: they produce progressively linearized representations from input data lying on nonlinear manifolds. Moreover, by leveraging existing understanding of and techniques for self-expressive models, DELVE inherits a collection of benefits, such as principled design choices, robustness via specialized layers, and efficiency via specialized optimizers. We demonstrate on image datasets that DELVE can effectively cluster data, remove data corruptions, and handle large-scale data.
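To make the "each data point is a linear combination of the other data points" observation concrete, here is a minimal sketch of the classical (shallow) self-expressive model that DELVE builds on, not of DELVE itself. It uses a ridge-regularized least-squares variant (as in least-squares-regression subspace clustering) because it has a closed form; the function name, toy data, and the crude zero-diagonal enforcement are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def self_expressive_coeffs(X, lam=0.1):
    """Self-expressive coefficients C minimizing ||X - X C||_F^2 + lam * ||C||_F^2.

    X is a (d, n) matrix whose columns are data points. Each column of C
    expresses one point as a linear combination of the others; the diagonal
    is zeroed afterwards as a crude way to forbid self-representation.
    """
    n = X.shape[1]
    G = X.T @ X                                   # Gram matrix of the data
    C = np.linalg.solve(G + lam * np.eye(n), G)   # closed-form ridge solution
    np.fill_diagonal(C, 0.0)                      # disallow x_j representing itself
    return C

# Toy data: 5 points on each of two orthogonal 1-D subspaces in R^3.
rng = np.random.default_rng(0)
u, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
X = np.hstack([np.outer(u, rng.standard_normal(5)),
               np.outer(v, rng.standard_normal(5))])

C = self_expressive_coeffs(X)
W = np.abs(C) + np.abs(C).T   # symmetric affinity, ready for spectral clustering
```

Because the two toy subspaces are orthogonal, the Gram matrix is block diagonal and so is `C`: points are represented only by points from their own subspace, which is exactly the "subspace-preserving" property that makes spectral clustering on `W` recover the clusters. DELVE stacks layers of this kind of self-expressive operation to handle nonlinear manifolds.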

Cite this Paper

BibTeX
@InProceedings{pmlr-v234-zhao24a,
  title     = {Deep Self-expressive Learning},
  author    = {Zhao, Chen and Li, Chun-Guang and He, Wei and You, Chong},
  booktitle = {Conference on Parsimony and Learning},
  pages     = {228--247},
  year      = {2024},
  editor    = {Chi, Yuejie and Dziugaite, Gintare Karolina and Qu, Qing and Wang, Atlas and Zhu, Zhihui},
  volume    = {234},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Jan},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v234/zhao24a/zhao24a.pdf},
  url       = {https://proceedings.mlr.press/v234/zhao24a.html},
  abstract  = {Self-expressive model is a method for clustering data drawn from a union of low-dimensional linear subspaces. It gains a lot of popularity due to its: 1) simplicity, based on the observation that each data point can be expressed as a linear combination of the other data points, 2) provable correctness under broad geometric and statistical conditions, and 3) many extensions for handling corrupted, imbalanced, and large-scale real data. This paper extends the self-expressive model to a Deep sELf-expressiVE model (DELVE) for handling the more challenging case that the data lie in a union of nonlinear manifolds. DELVE is constructed from stacking self-expressive layers, each of which maps each data point to a linear combination of the other data points, and can be trained via minimizing self-expressive losses. With such a design, the operator, architecture, and training of DELVE have the explicit interpretation of producing progressively linearized representations from the input data in nonlinear manifolds. Moreover, by leveraging existing understanding and techniques for self-expressive models, DELVE has a collection of benefits such as design choice by principles, robustness via specialized layers, and efficiency via specialized optimizers. We demonstrate on image datasets that DELVE can effectively perform data clustering, remove data corruptions, and handle large scale data.}
}
Endnote
%0 Conference Paper
%T Deep Self-expressive Learning
%A Chen Zhao
%A Chun-Guang Li
%A Wei He
%A Chong You
%B Conference on Parsimony and Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Yuejie Chi
%E Gintare Karolina Dziugaite
%E Qing Qu
%E Atlas Wang
%E Zhihui Zhu
%F pmlr-v234-zhao24a
%I PMLR
%P 228--247
%U https://proceedings.mlr.press/v234/zhao24a.html
%V 234
%X Self-expressive model is a method for clustering data drawn from a union of low-dimensional linear subspaces. It gains a lot of popularity due to its: 1) simplicity, based on the observation that each data point can be expressed as a linear combination of the other data points, 2) provable correctness under broad geometric and statistical conditions, and 3) many extensions for handling corrupted, imbalanced, and large-scale real data. This paper extends the self-expressive model to a Deep sELf-expressiVE model (DELVE) for handling the more challenging case that the data lie in a union of nonlinear manifolds. DELVE is constructed from stacking self-expressive layers, each of which maps each data point to a linear combination of the other data points, and can be trained via minimizing self-expressive losses. With such a design, the operator, architecture, and training of DELVE have the explicit interpretation of producing progressively linearized representations from the input data in nonlinear manifolds. Moreover, by leveraging existing understanding and techniques for self-expressive models, DELVE has a collection of benefits such as design choice by principles, robustness via specialized layers, and efficiency via specialized optimizers. We demonstrate on image datasets that DELVE can effectively perform data clustering, remove data corruptions, and handle large scale data.
APA
Zhao, C., Li, C., He, W. & You, C. (2024). Deep Self-expressive Learning. Conference on Parsimony and Learning, in Proceedings of Machine Learning Research 234:228-247. Available from https://proceedings.mlr.press/v234/zhao24a.html.
