Nonparametric Budgeted Stochastic Gradient Descent

Trung Le, Vu Nguyen, Tu Dinh Nguyen, Dinh Phung
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:564-572, 2016.

Abstract

One of the most challenging problems in kernel online learning is to bound the model size. Budgeted kernel online learning addresses this issue by bounding the model size to a predefined budget. However, determining an appropriate value for such a predefined budget is arduous. In this paper, we propose the Nonparametric Budgeted Stochastic Gradient Descent, which allows the model size to grow automatically with the data in a principled way. We provide a theoretical analysis showing that our framework is guaranteed to converge for a large collection of loss functions (e.g., Hinge, Logistic, L2, L1, and ε-insensitive), which enables the proposed algorithm to perform both classification and regression tasks without hurting the ideal convergence rate O(1/T) of standard Stochastic Gradient Descent. We validate our algorithm on real-world datasets to consolidate the theoretical claims.
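To make the core idea concrete, below is a minimal, hypothetical Python sketch of budgeted kernel online SGD in which the support set grows only when an incoming example is poorly covered by the existing support vectors. This is an illustrative approximation of the general idea, not the authors' exact procedure; the RBF kernel, the coverage threshold tau, the regularization parameter lam, and the function name budgeted_kernel_sgd are all assumptions made for this example.

    import numpy as np

    def rbf(x, z, gamma=1.0):
        # Gaussian (RBF) kernel between two feature vectors.
        return np.exp(-gamma * np.linalg.norm(np.asarray(x) - np.asarray(z)) ** 2)

    def budgeted_kernel_sgd(stream, lam=0.01, tau=0.5, gamma=1.0):
        # stream: iterable of (x, y) pairs with labels y in {-1, +1} (hinge loss case).
        centers, alphas = [], []                 # support vectors and their coefficients
        for t, (x, y) in enumerate(stream, start=1):
            eta = 1.0 / (lam * t)                # standard 1/(lambda * t) step size
            k = np.array([rbf(c, x, gamma) for c in centers])
            f = float(k @ np.array(alphas)) if centers else 0.0
            alphas = [(1.0 - eta * lam) * a for a in alphas]   # shrink from the L2 regularizer
            if y * f < 1.0:                      # hinge loss is active at this example
                if not centers or k.max() < tau:
                    centers.append(x)            # grow the model: x is far from every center
                    alphas.append(eta * y)
                else:
                    j = int(k.argmax())          # otherwise fold the update into the nearest center
                    alphas[j] += eta * y
        return centers, alphas

Under these assumptions, running budgeted_kernel_sgd(zip(X, y)) over a labeled stream returns the learned centers and coefficients, and a new point x is classified by the sign of sum_i alphas[i] * rbf(centers[i], x); the model size is not fixed in advance but grows only when a new example cannot be absorbed by an existing center.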

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-le16, title = {Nonparametric Budgeted Stochastic Gradient Descent}, author = {Le, Trung and Nguyen, Vu and Nguyen, Tu Dinh and Phung, Dinh}, booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics}, pages = {564--572}, year = {2016}, editor = {Gretton, Arthur and Robert, Christian C.}, volume = {51}, series = {Proceedings of Machine Learning Research}, address = {Cadiz, Spain}, month = {09--11 May}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v51/le16.pdf}, url = {https://proceedings.mlr.press/v51/le16.html}, abstract = {One of the most challenging problems in kernel online learning is to bound the model size. Budgeted kernel online learning addresses this issue by bounding the model size to a predefined budget. However, determining an appropriate value for such predefined budget is arduous. In this paper, we propose the Nonparametric Budgeted Stochastic Gradient Descent that allows the model size to automatically grow with data in a principled way. We provide theoretical analysis to show that our framework is guaranteed to converge for a large collection of loss functions (e.g. Hinge, Logistic, L2, L1, and \varepsilon-insensitive) which enables the proposed algorithm to perform both classification and regression tasks without hurting the ideal convergence rate O\left(\frac1T\right) of the standard Stochastic Gradient Descent. We validate our algorithm on the real-world datasets to consolidate the theoretical claims.} }
Endnote
%0 Conference Paper %T Nonparametric Budgeted Stochastic Gradient Descent %A Trung Le %A Vu Nguyen %A Tu Dinh Nguyen %A Dinh Phung %B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2016 %E Arthur Gretton %E Christian C. Robert %F pmlr-v51-le16 %I PMLR %P 564--572 %U https://proceedings.mlr.press/v51/le16.html %V 51 %X One of the most challenging problems in kernel online learning is to bound the model size. Budgeted kernel online learning addresses this issue by bounding the model size to a predefined budget. However, determining an appropriate value for such predefined budget is arduous. In this paper, we propose the Nonparametric Budgeted Stochastic Gradient Descent that allows the model size to automatically grow with data in a principled way. We provide theoretical analysis to show that our framework is guaranteed to converge for a large collection of loss functions (e.g. Hinge, Logistic, L2, L1, and \varepsilon-insensitive) which enables the proposed algorithm to perform both classification and regression tasks without hurting the ideal convergence rate O\left(\frac1T\right) of the standard Stochastic Gradient Descent. We validate our algorithm on the real-world datasets to consolidate the theoretical claims.
RIS
TY - CPAPER TI - Nonparametric Budgeted Stochastic Gradient Descent AU - Trung Le AU - Vu Nguyen AU - Tu Dinh Nguyen AU - Dinh Phung BT - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics DA - 2016/05/02 ED - Arthur Gretton ED - Christian C. Robert ID - pmlr-v51-le16 PB - PMLR DP - Proceedings of Machine Learning Research VL - 51 SP - 564 EP - 572 L1 - http://proceedings.mlr.press/v51/le16.pdf UR - https://proceedings.mlr.press/v51/le16.html AB - One of the most challenging problems in kernel online learning is to bound the model size. Budgeted kernel online learning addresses this issue by bounding the model size to a predefined budget. However, determining an appropriate value for such predefined budget is arduous. In this paper, we propose the Nonparametric Budgeted Stochastic Gradient Descent that allows the model size to automatically grow with data in a principled way. We provide theoretical analysis to show that our framework is guaranteed to converge for a large collection of loss functions (e.g. Hinge, Logistic, L2, L1, and \varepsilon-insensitive) which enables the proposed algorithm to perform both classification and regression tasks without hurting the ideal convergence rate O\left(\frac1T\right) of the standard Stochastic Gradient Descent. We validate our algorithm on the real-world datasets to consolidate the theoretical claims. ER -
APA
Le, T., Nguyen, V., Nguyen, T.D. & Phung, D. (2016). Nonparametric Budgeted Stochastic Gradient Descent. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 51:564-572. Available from https://proceedings.mlr.press/v51/le16.html.
