Extending Kernel PCA through Dualization: Sparsity, Robustness and Fast Algorithms

Francesco Tonin, Alex Lambert, Panagiotis Patrinos, Johan Suykens
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:34379-34393, 2023.

Abstract

The goal of this paper is to revisit Kernel Principal Component Analysis (KPCA) through dualization of a difference of convex functions. This allows us to naturally extend KPCA to multiple objective functions and leads to efficient gradient-based algorithms avoiding the expensive SVD of the Gram matrix. In particular, we consider objective functions that can be written as Moreau envelopes, demonstrating how to promote robustness and sparsity within the same framework. The proposed method is evaluated on synthetic and real-world benchmarks, showing significant speedup in KPCA training time as well as highlighting the benefits in terms of robustness and sparsity.
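As a rough illustration of what "avoiding the expensive SVD of the Gram matrix" can mean in practice, the sketch below contrasts standard KPCA (full eigendecomposition of the centered Gram matrix) with a simple subspace iteration that only needs Gram-matrix-vector products. This is a minimal sketch under generic assumptions, not the paper's dualization-based algorithm; all names (rbf_kernel, kpca_matvec_only, the bandwidth gamma, the iteration count) are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Pairwise squared distances -> Gaussian (RBF) Gram matrix.
    sq = np.sum(X ** 2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * D)

def center_gram(K):
    # Double-center the Gram matrix (mean removal in feature space).
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kpca_eigh(K, k):
    # Baseline KPCA: full eigendecomposition of the n x n Gram matrix.
    vals, vecs = np.linalg.eigh(K)
    return vals[::-1][:k], vecs[:, ::-1][:, :k]

def kpca_matvec_only(K, k, n_iter=200, seed=0):
    # SVD-free alternative: orthogonal (subspace) iteration that touches K
    # only through matrix products, maximizing tr(W^T K W) over W^T W = I.
    # This is NOT the paper's dualization-based method, just an illustration
    # of extracting leading components without a full eigendecomposition.
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((K.shape[0], k)))
    for _ in range(n_iter):
        W, _ = np.linalg.qr(K @ W)
    return np.diag(W.T @ K @ W), W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (-1.0, 1.0)])
    K = center_gram(rbf_kernel(X))
    v_full, _ = kpca_eigh(K, k=2)
    v_iter, _ = kpca_matvec_only(K, k=2)
    print("top-2 eigenvalues (full eigh):        ", np.round(v_full, 4))
    print("top-2 eigenvalues (matvec iterations):", np.round(v_iter, 4))
```

On large Gram matrices, methods that only need matrix-vector products scale far better than a full eigendecomposition, which is the flavor of speedup the abstract refers to; the paper itself obtains SVD-free training through a different route, via dualization and Moreau envelopes.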

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-tonin23a,
  title     = {Extending Kernel {PCA} through Dualization: Sparsity, Robustness and Fast Algorithms},
  author    = {Tonin, Francesco and Lambert, Alex and Patrinos, Panagiotis and Suykens, Johan},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {34379--34393},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/tonin23a/tonin23a.pdf},
  url       = {https://proceedings.mlr.press/v202/tonin23a.html},
  abstract  = {The goal of this paper is to revisit Kernel Principal Component Analysis (KPCA) through dualization of a difference of convex functions. This allows us to naturally extend KPCA to multiple objective functions and leads to efficient gradient-based algorithms avoiding the expensive SVD of the Gram matrix. In particular, we consider objective functions that can be written as Moreau envelopes, demonstrating how to promote robustness and sparsity within the same framework. The proposed method is evaluated on synthetic and real-world benchmarks, showing significant speedup in KPCA training time as well as highlighting the benefits in terms of robustness and sparsity.}
}
Endnote
%0 Conference Paper
%T Extending Kernel PCA through Dualization: Sparsity, Robustness and Fast Algorithms
%A Francesco Tonin
%A Alex Lambert
%A Panagiotis Patrinos
%A Johan Suykens
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-tonin23a
%I PMLR
%P 34379--34393
%U https://proceedings.mlr.press/v202/tonin23a.html
%V 202
%X The goal of this paper is to revisit Kernel Principal Component Analysis (KPCA) through dualization of a difference of convex functions. This allows us to naturally extend KPCA to multiple objective functions and leads to efficient gradient-based algorithms avoiding the expensive SVD of the Gram matrix. In particular, we consider objective functions that can be written as Moreau envelopes, demonstrating how to promote robustness and sparsity within the same framework. The proposed method is evaluated on synthetic and real-world benchmarks, showing significant speedup in KPCA training time as well as highlighting the benefits in terms of robustness and sparsity.
APA
Tonin, F., Lambert, A., Patrinos, P. & Suykens, J. (2023). Extending Kernel PCA through Dualization: Sparsity, Robustness and Fast Algorithms. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:34379-34393. Available from https://proceedings.mlr.press/v202/tonin23a.html.