Generalization Guarantees for Sparse Kernel Approximation with Entropic Optimal Features

Liang Ding, Rui Tuo, Shahin Shahrampour
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2545-2555, 2020.

Abstract

Despite their success, kernel methods suffer from a massive computational cost in practice. In this paper, in lieu of commonly used kernel expansion with respect to $N$ inputs, we develop a novel optimal design maximizing the entropy among kernel features. This procedure results in a kernel expansion with respect to entropic optimal features (EOF), improving the data representation dramatically due to feature dissimilarity. Under mild technical assumptions, our generalization bound shows that with only $O(N^{\frac{1}{4}})$ features (disregarding logarithmic factors), we can achieve the optimal statistical accuracy (i.e., $O(1/\sqrt{N})$). The salient feature of our design is its sparsity, which significantly reduces the time and space costs. Our numerical experiments on benchmark datasets verify the superiority of EOF over the state-of-the-art in kernel approximation.
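To make the cost structure concrete, the sketch below illustrates generic feature-based kernel regression with only $m \approx N^{1/4}$ landmark features instead of the full $N \times N$ kernel expansion. This is not the EOF construction (the paper's entropy-maximizing feature design); the landmarks here are chosen uniformly at random purely as an assumed stand-in, and the RBF kernel, `gamma`, and ridge parameter `lam` are illustrative choices.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
N = 256
X = rng.uniform(-1.0, 1.0, size=(N, 2))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(N)

# m ~ N^{1/4} features: here uniform random landmarks; EOF would instead
# select features to maximize entropy (i.e., mutual dissimilarity).
m = max(4, int(round(N ** 0.25)))
landmarks = X[rng.choice(N, size=m, replace=False)]

# Solve the m-dimensional ridge problem: y ~ K(X, landmarks) @ alpha.
# Cost is O(N m^2) time and O(N m) space, versus O(N^3) / O(N^2)
# for the full kernel expansion.
Phi = rbf(X, landmarks)            # N x m feature matrix
lam = 1e-3
alpha = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
y_hat = Phi @ alpha

train_mse = float(np.mean((y_hat - y) ** 2))
```

The point of the sketch is only the shapes and costs: the linear system being solved is $m \times m$ rather than $N \times N$, which is where the sparsity of a well-chosen feature set pays off.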

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-ding20b,
  title     = {Generalization Guarantees for Sparse Kernel Approximation with Entropic Optimal Features},
  author    = {Ding, Liang and Tuo, Rui and Shahrampour, Shahin},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2545--2555},
  year      = {2020},
  editor    = {Hal Daumé III and Aarti Singh},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/ding20b/ding20b.pdf},
  url       = {http://proceedings.mlr.press/v119/ding20b.html},
  abstract  = {Despite their success, kernel methods suffer from a massive computational cost in practice. In this paper, in lieu of commonly used kernel expansion with respect to $N$ inputs, we develop a novel optimal design maximizing the entropy among kernel features. This procedure results in a kernel expansion with respect to entropic optimal features (EOF), improving the data representation dramatically due to feature dissimilarity. Under mild technical assumptions, our generalization bound shows that with only $O(N^{\frac{1}{4}})$ features (disregarding logarithmic factors), we can achieve the optimal statistical accuracy (i.e., $O(1/\sqrt{N})$). The salient feature of our design is its sparsity, which significantly reduces the time and space costs. Our numerical experiments on benchmark datasets verify the superiority of EOF over the state-of-the-art in kernel approximation.}
}
Endnote
%0 Conference Paper
%T Generalization Guarantees for Sparse Kernel Approximation with Entropic Optimal Features
%A Liang Ding
%A Rui Tuo
%A Shahin Shahrampour
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-ding20b
%I PMLR
%P 2545--2555
%U http://proceedings.mlr.press/v119/ding20b.html
%V 119
%X Despite their success, kernel methods suffer from a massive computational cost in practice. In this paper, in lieu of commonly used kernel expansion with respect to $N$ inputs, we develop a novel optimal design maximizing the entropy among kernel features. This procedure results in a kernel expansion with respect to entropic optimal features (EOF), improving the data representation dramatically due to feature dissimilarity. Under mild technical assumptions, our generalization bound shows that with only $O(N^{\frac{1}{4}})$ features (disregarding logarithmic factors), we can achieve the optimal statistical accuracy (i.e., $O(1/\sqrt{N})$). The salient feature of our design is its sparsity, which significantly reduces the time and space costs. Our numerical experiments on benchmark datasets verify the superiority of EOF over the state-of-the-art in kernel approximation.
APA
Ding, L., Tuo, R. & Shahrampour, S. (2020). Generalization Guarantees for Sparse Kernel Approximation with Entropic Optimal Features. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2545-2555. Available from http://proceedings.mlr.press/v119/ding20b.html.