A Case of Exponential Convergence Rates for SVM

Vivien Cabannes, Stefano Vigogna
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:359-374, 2023.

Abstract

Optimizing the misclassification risk is in general NP-hard. Tractable solvers can be obtained by considering a surrogate regression problem. While convergence to the regression function is typically sublinear, the corresponding classification error can decay much faster. Fast and super-fast rates (up to exponential) have been established for general smooth losses on problems where a hard margin is present between classes. This leaves out models based on non-smooth losses such as support vector machines, and problems where there is no hard margin, raising several questions. Are such models incapable of fast convergence? Are they therefore structurally inferior? Is the hard margin condition really necessary to obtain exponential convergence? Developing a new strategy, we provide an answer to these questions. In particular, we show not only that support vector machines can indeed converge exponentially fast, but also that they can do so even without a hard margin.
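For orientation, the objects named in the abstract can be written down explicitly. The display below is a sketch using textbook definitions, not the paper's own theorem statements; the regression function η, the margin level τ, the estimator f̂_n, and the constants C, c, α are illustrative notation introduced here, with binary labels Y ∈ {−1, +1}.

% Misclassification risk, and the non-smooth hinge surrogate minimized by SVM:
\[
  \mathcal{R}(f) = \mathbb{P}\big(\operatorname{sign} f(X) \neq Y\big),
  \qquad
  \mathcal{R}_{\mathrm{hinge}}(f) = \mathbb{E}\big[\max\big(0,\, 1 - Y f(X)\big)\big].
\]
% Hard margin (Massart-type) condition: the regression function
% \eta(x) = \mathbb{P}(Y = 1 \mid X = x) stays uniformly away from the
% critical level 1/2:
\[
  \exists\, \tau > 0 : \quad |2\eta(X) - 1| \geq \tau \quad \text{almost surely}.
\]
% Exponential rate, in schematic form: even when the surrogate excess risk
% decays only polynomially in the sample size n, the classification excess
% risk of an estimator \hat{f}_n can obey a bound of the type
\[
  \mathbb{E}\big[\mathcal{R}(\hat{f}_n)\big] - \inf_{f} \mathcal{R}(f)
  \;\leq\; C \exp(-c\, n^{\alpha}),
  \qquad C, c, \alpha > 0.
\]

In these terms, the abstract's claim is that estimators built on the hinge loss can achieve bounds of this exponential form, and that the hard margin condition above can be relaxed.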

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-cabannnes23a,
  title     = {A Case of Exponential Convergence Rates for SVM},
  author    = {Cabannes, Vivien and Vigogna, Stefano},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {359--374},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/cabannnes23a/cabannnes23a.pdf},
  url       = {https://proceedings.mlr.press/v206/cabannnes23a.html}
}
Endnote
%0 Conference Paper
%T A Case of Exponential Convergence Rates for SVM
%A Vivien Cabannes
%A Stefano Vigogna
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-cabannnes23a
%I PMLR
%P 359--374
%U https://proceedings.mlr.press/v206/cabannnes23a.html
%V 206
APA
Cabannes, V. & Vigogna, S. (2023). A Case of Exponential Convergence Rates for SVM. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:359-374. Available from https://proceedings.mlr.press/v206/cabannnes23a.html.
