SGLB: Stochastic Gradient Langevin Boosting

Aleksei Ustimenko, Liudmila Prokhorenkova
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:10487-10496, 2021.

Abstract

This paper introduces Stochastic Gradient Langevin Boosting (SGLB), a powerful and efficient machine learning framework that can deal with a wide range of loss functions and has provable generalization guarantees. The method is based on a special form of the Langevin diffusion equation specifically designed for gradient boosting. This allows us to theoretically guarantee global convergence even for multimodal loss functions, whereas standard gradient boosting algorithms can guarantee only a local optimum. We also empirically show that SGLB outperforms classic gradient boosting when applied to classification tasks with the 0-1 loss function, which is known to be multimodal.
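
For intuition, below is a minimal, self-contained sketch of an SGLB-style training loop: at each boosting iteration the negative gradients are perturbed with Gaussian noise (as in stochastic gradient Langevin dynamics) and the ensemble is shrunk towards zero, which together act as a discretized Langevin diffusion. This is only an illustration under assumed names and defaults (beta as an inverse temperature, gamma as the regularization strength, sklearn decision trees as weak learners); it is not the paper's exact algorithm or reference implementation.

# Illustrative sketch only, not the authors' implementation. Hyperparameter
# names (beta, gamma) and the choice of sklearn trees are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sglb_fit(X, y, loss_grad, n_iters=200, learning_rate=0.1,
             beta=1e4, gamma=1e-3, max_depth=3, seed=0):
    """Gradient boosting with Langevin-style Gaussian noise in the targets.

    loss_grad(y, F) must return dL/dF evaluated elementwise.
    Returns the fitted trees and their final combination weights.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    F = np.zeros(n)                      # current ensemble prediction
    trees = []
    for _ in range(n_iters):
        # Negative gradient of the loss at the current prediction ...
        target = -loss_grad(y, F)
        # ... perturbed by Gaussian noise, as in stochastic gradient
        # Langevin dynamics; beta plays the role of an inverse temperature.
        target += rng.normal(scale=np.sqrt(2.0 / (beta * learning_rate)), size=n)
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, target)
        # Shrink the old ensemble towards zero (L2-style regularization,
        # the drift of the diffusion) and add the new weak learner.
        F = (1.0 - gamma * learning_rate) * F + learning_rate * tree.predict(X)
        trees.append(tree)
    # Contribution of tree m to the final model after repeated shrinkage.
    weights = learning_rate * (1.0 - gamma * learning_rate) ** np.arange(n_iters - 1, -1, -1)
    return trees, weights

def sglb_predict(trees, weights, X):
    return sum(w * t.predict(X) for w, t in zip(weights, trees))

# Toy usage: squared loss, so dL/dF = F - y.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
    trees, weights = sglb_fit(X, y, lambda y, F: F - y)
    print("train MSE:", np.mean((sglb_predict(trees, weights, X) - y) ** 2))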

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-ustimenko21a,
  title     = {SGLB: Stochastic Gradient Langevin Boosting},
  author    = {Ustimenko, Aleksei and Prokhorenkova, Liudmila},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {10487--10496},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/ustimenko21a/ustimenko21a.pdf},
  url       = {https://proceedings.mlr.press/v139/ustimenko21a.html},
  abstract  = {This paper introduces Stochastic Gradient Langevin Boosting (SGLB) - a powerful and efficient machine learning framework that may deal with a wide range of loss functions and has provable generalization guarantees. The method is based on a special form of the Langevin diffusion equation specifically designed for gradient boosting. This allows us to theoretically guarantee the global convergence even for multimodal loss functions, while standard gradient boosting algorithms can guarantee only local optimum. We also empirically show that SGLB outperforms classic gradient boosting when applied to classification tasks with 0-1 loss function, which is known to be multimodal.}
}
Endnote
%0 Conference Paper
%T SGLB: Stochastic Gradient Langevin Boosting
%A Aleksei Ustimenko
%A Liudmila Prokhorenkova
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-ustimenko21a
%I PMLR
%P 10487--10496
%U https://proceedings.mlr.press/v139/ustimenko21a.html
%V 139
%X This paper introduces Stochastic Gradient Langevin Boosting (SGLB) - a powerful and efficient machine learning framework that may deal with a wide range of loss functions and has provable generalization guarantees. The method is based on a special form of the Langevin diffusion equation specifically designed for gradient boosting. This allows us to theoretically guarantee the global convergence even for multimodal loss functions, while standard gradient boosting algorithms can guarantee only local optimum. We also empirically show that SGLB outperforms classic gradient boosting when applied to classification tasks with 0-1 loss function, which is known to be multimodal.
APA
Ustimenko, A. & Prokhorenkova, L. (2021). SGLB: Stochastic Gradient Langevin Boosting. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:10487-10496. Available from https://proceedings.mlr.press/v139/ustimenko21a.html.