GeNGA: A Generalization of Natural Gradient Ascent with Positive and Negative Convergence Results

Philip Thomas
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1575-1583, 2014.

Abstract

Natural gradient ascent (NGA) is a popular optimization method that uses a positive definite metric tensor. In many applications the metric tensor is only guaranteed to be positive semidefinite (e.g., when using the Fisher information matrix as the metric tensor), in which case NGA is not applicable. In our first contribution, we derive generalized natural gradient ascent (GeNGA), a generalization of NGA which allows for positive semidefinite non-smooth metric tensors. In our second contribution we show that, in standard settings, GeNGA and NGA can both be divergent. We then establish sufficient conditions to ensure that both achieve various forms of convergence. In our third contribution we show how several reinforcement learning methods that use NGA without positive definite metric tensors can be adapted to properly use GeNGA.
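As a concrete illustration of the distinction the abstract draws, here is a minimal NumPy sketch. It assumes the textbook NGA update theta <- theta + alpha * G(theta)^{-1} grad f(theta), and approximates GeNGA's idea by substituting the Moore-Penrose pseudoinverse when the metric tensor G is only positive semidefinite; the paper's actual construction (which also handles non-smooth metric tensors) is more general, and the function names here are hypothetical stand-ins.

    import numpy as np

    def nga_step(theta, grad, G, alpha=0.1):
        # Standard NGA update: theta <- theta + alpha * G^{-1} grad.
        # Requires G to be positive definite; solve() raises
        # LinAlgError if G is singular.
        return theta + alpha * np.linalg.solve(G, grad)

    def genga_step(theta, grad, G, alpha=0.1):
        # Sketch of the positive semidefinite case: replace the
        # inverse with the Moore-Penrose pseudoinverse so the ascent
        # direction remains well defined even when G is singular.
        # (Illustrative simplification, not the paper's exact method.)
        return theta + alpha * (np.linalg.pinv(G) @ grad)

    # Example: a PSD but singular metric tensor, as can arise with
    # the Fisher information matrix.
    G = np.array([[1.0, 0.0],
                  [0.0, 0.0]])
    grad = np.array([1.0, 0.5])
    theta = np.zeros(2)

    theta = genga_step(theta, grad, G)   # direction is [1.0, 0.0]
    # nga_step(theta, grad, G) would raise numpy.linalg.LinAlgError

The pseudoinverse projects the gradient onto the range of G, so the step ignores directions in which the metric assigns zero curvature; this is one natural way to make the update well defined in the PSD setting the abstract describes.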

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-thomasb14,
  title     = {GeNGA: A Generalization of Natural Gradient Ascent with Positive and Negative Convergence Results},
  author    = {Thomas, Philip},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1575--1583},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/thomasb14.pdf},
  url       = {https://proceedings.mlr.press/v32/thomasb14.html},
}
APA
Thomas, P. (2014). GeNGA: A Generalization of Natural Gradient Ascent with Positive and Negative Convergence Results. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1575-1583. Available from https://proceedings.mlr.press/v32/thomasb14.html.
