Guided Learning of Nonconvex Models through Successive Functional Gradient Optimization

Rie Johnson, Tong Zhang
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:4921-4930, 2020.

Abstract

This paper presents a framework of successive functional gradient optimization for training nonconvex models such as neural networks, where training is driven by mirror descent in a function space. We provide a theoretical analysis and empirical study of the training method derived from this framework. It is shown that the method leads to better performance than that of standard training techniques.
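As a rough illustration of the functional-gradient idea the abstract refers to (this is a generic sketch, not the paper's actual algorithm): in function space, a model's predictions f(x_i) on the training points can themselves be updated by gradient descent on the loss, independently of any parameterization. All names and step sizes below are illustrative assumptions.

```python
import numpy as np

# Generic sketch of a functional gradient step (not the paper's method):
# the "function" is represented by its values f[i] at the training points,
# and each step moves those values along the negative gradient of the
# logistic loss, without reference to model parameters.

def logistic_loss_grad(f, y):
    # d/df log(1 + exp(-y * f)) = -y * sigmoid(-y * f)
    return -y / (1.0 + np.exp(y * f))

def functional_gradient_step(f, y, lr=0.5):
    # Gradient step directly on function values (illustrative step size).
    return f - lr * logistic_loss_grad(f, y)

# Toy data: binary labels in {-1, +1}; function values start at zero.
y = np.array([1.0, -1.0, 1.0])
f = np.zeros_like(y)
for _ in range(100):
    f = functional_gradient_step(f, y)

# After many steps the margins y * f become large and positive,
# i.e., the per-example logistic loss shrinks toward zero.
print(y * f)
```

In a guided-training scheme of the kind the abstract describes, such functionally updated values would serve as intermediate targets that a parameterized model (e.g., a neural network) is successively fit to; the mirror-descent view generalizes the plain gradient step above by taking the step under a Bregman divergence.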

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-johnson20b,
  title     = {Guided Learning of Nonconvex Models through Successive Functional Gradient Optimization},
  author    = {Johnson, Rie and Zhang, Tong},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {4921--4930},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/johnson20b/johnson20b.pdf},
  url       = {https://proceedings.mlr.press/v119/johnson20b.html},
  abstract  = {This paper presents a framework of successive functional gradient optimization for training nonconvex models such as neural networks, where training is driven by mirror descent in a function space. We provide a theoretical analysis and empirical study of the training method derived from this framework. It is shown that the method leads to better performance than that of standard training techniques.}
}
Endnote
%0 Conference Paper
%T Guided Learning of Nonconvex Models through Successive Functional Gradient Optimization
%A Rie Johnson
%A Tong Zhang
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-johnson20b
%I PMLR
%P 4921--4930
%U https://proceedings.mlr.press/v119/johnson20b.html
%V 119
%X This paper presents a framework of successive functional gradient optimization for training nonconvex models such as neural networks, where training is driven by mirror descent in a function space. We provide a theoretical analysis and empirical study of the training method derived from this framework. It is shown that the method leads to better performance than that of standard training techniques.
APA
Johnson, R. & Zhang, T. (2020). Guided Learning of Nonconvex Models through Successive Functional Gradient Optimization. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:4921-4930. Available from https://proceedings.mlr.press/v119/johnson20b.html.