Guided Learning of Nonconvex Models through Successive Functional Gradient Optimization
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:4921-4930, 2020.
This paper presents a framework of successive functional gradient optimization for training nonconvex models such as neural networks, where training is driven by mirror descent in a function space. We provide a theoretical analysis and an empirical study of the training method derived from this framework, and show that it achieves better performance than standard training techniques.
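To give a concrete picture of "successive" optimization guided in function space, the sketch below is a minimal, hypothetical PyTorch illustration: each stage re-trains the model on a mixture of the task loss and a proximity term that keeps the new model's outputs close to the previous stage's outputs. The toy regression data, the mixing weight `alpha`, the stage and step counts, and the staged objective itself are assumptions chosen for illustration only; this is not the paper's exact algorithm or its theoretical analysis.

```python
# Minimal sketch (assumptions: PyTorch, toy regression data, and an
# illustrative staged objective; not the paper's exact method).
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 10)
y = torch.sin(X.sum(dim=1, keepdim=True))       # toy nonlinear target

def make_model():
    return nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

model = make_model()
mse = nn.MSELoss()
alpha = 0.3                                      # mixing weight (assumed)

for stage in range(5):                           # successive stages
    prev = copy.deepcopy(model).eval()           # freeze previous function
    for p in prev.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):                      # inner optimization
        opt.zero_grad()
        pred = model(X)
        with torch.no_grad():
            prev_out = prev(X)
        # Stage objective: reduce the task loss while staying close, in
        # function space, to the previous stage's outputs.
        loss = alpha * mse(pred, y) + (1 - alpha) * mse(pred, prev_out)
        loss.backward()
        opt.step()
    print(f"stage {stage}: task loss {mse(model(X), y).item():.4f}")
```

In this sketch, each stage is anchored to the previous model's outputs rather than its parameters, which is what makes the regularization function-space rather than parameter-space; how the actual framework defines and analyzes the function-space updates is detailed in the paper itself.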