A unified view of likelihood ratio and reparameterization gradients

Paavo Parmas, Masashi Sugiyama
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:4078-4086, 2021.

Abstract

Reparameterization (RP) and likelihood ratio (LR) gradient estimators are used to estimate gradients of expectations throughout machine learning and reinforcement learning; however, they are usually explained as simple mathematical tricks, with no insight into their nature. We use a first-principles approach to explain that LR and RP are alternative methods of keeping track of the movement of probability mass, and that the two are connected via the divergence theorem. Moreover, we show that the space of all possible estimators combining LR and RP can be completely parameterized by a flow field u(x) and an importance sampling distribution q(x). We prove that there cannot exist a single-sample estimator of this type outside our characterized space, thus clarifying where we should be searching for better Monte Carlo gradient estimators.
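To make the distinction between the two estimators concrete, here is a minimal sketch (in Python with NumPy; not the authors' code) of LR and RP on a toy problem: estimating d/dtheta of E[f(x)] for x ~ N(theta, sigma^2). The choices f(x) = x^2, theta = 1.5, sigma = 1.0, and the sample count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n = 1.5, 1.0, 100_000

def f(x):
    return x ** 2  # true gradient of E[f(x)] w.r.t. theta is 2 * theta

# Likelihood ratio (score function / REINFORCE) estimator:
# grad = E[ f(x) * d/dtheta log p(x; theta) ], where for a Gaussian
# d/dtheta log N(x; theta, sigma^2) = (x - theta) / sigma^2.
x = rng.normal(theta, sigma, size=n)
lr_grad = np.mean(f(x) * (x - theta) / sigma ** 2)

# Reparameterization (pathwise) estimator:
# write x = theta + sigma * eps with eps ~ N(0, 1), then
# grad = E[ f'(theta + sigma * eps) * dx/dtheta ].
eps = rng.standard_normal(n)
rp_grad = np.mean(2 * (theta + sigma * eps))  # f'(x) = 2x, dx/dtheta = 1

print(f"true {2 * theta:.3f}  LR {lr_grad:.3f}  RP {rp_grad:.3f}")

Both estimators are unbiased for the same gradient, but they track the movement of probability mass differently: LR weights function values by the score, while RP differentiates through the sample path; their variances typically differ.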

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-parmas21a,
  title     = {A unified view of likelihood ratio and reparameterization gradients},
  author    = {Parmas, Paavo and Sugiyama, Masashi},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {4078--4086},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/parmas21a/parmas21a.pdf},
  url       = {https://proceedings.mlr.press/v130/parmas21a.html}
}
Endnote
%0 Conference Paper
%T A unified view of likelihood ratio and reparameterization gradients
%A Paavo Parmas
%A Masashi Sugiyama
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-parmas21a
%I PMLR
%P 4078--4086
%U https://proceedings.mlr.press/v130/parmas21a.html
%V 130
APA
Parmas, P. & Sugiyama, M. (2021). A unified view of likelihood ratio and reparameterization gradients. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:4078-4086. Available from https://proceedings.mlr.press/v130/parmas21a.html.
