Scalable Gradients and Variational Inference for Stochastic Differential Equations

Xuechen Li, Ting-Kam Leonard Wong, Ricky T. Q. Chen, David K. Duvenaud
Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference, PMLR 118:1-28, 2020.

Abstract

We derive reverse-mode (or adjoint) automatic differentiation for solutions of stochastic differential equations (SDEs), allowing time-efficient and constant-memory computation of pathwise gradients, a continuous-time analogue of the reparameterization trick. Specifically, we construct a backward SDE whose solution is the gradient and provide conditions under which numerical solutions converge. We also combine our stochastic adjoint approach with a stochastic variational inference scheme for continuous-time SDE models, allowing us to learn distributions over functions using stochastic gradient descent. Our latent SDE model achieves competitive performance compared to existing approaches on time series modeling.
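For concreteness, here is a minimal sketch of how the stochastic adjoint can be used in practice, written against the torchsde package that accompanies this line of work. The SDE below (a geometric-Brownian-motion-style process with a learnable drift parameter) is an illustrative assumption, not one of the paper's experiments. On backward(), sdeint_adjoint recovers the pathwise gradient by solving the backward adjoint SDE described in the abstract, reusing the same Brownian motion path rather than storing the solver's intermediate states, which is what yields the constant-memory property.

import torch
import torchsde

class GeometricDrift(torch.nn.Module):
    # Hypothetical 1-D Ito SDE with a learnable drift parameter theta:
    #   dY_t = theta * Y_t dt + 0.2 * Y_t dW_t
    noise_type = "diagonal"
    sde_type = "ito"

    def __init__(self):
        super().__init__()
        self.theta = torch.nn.Parameter(torch.tensor(0.5))

    def f(self, t, y):  # drift
        return self.theta * y

    def g(self, t, y):  # diffusion (diagonal noise: same shape as y)
        return 0.2 * y

sde = GeometricDrift()
y0 = torch.full((16, 1), 0.1)       # batch of 16 one-dimensional states
ts = torch.linspace(0.0, 1.0, 20)   # time grid for the solver output

# Forward pass: numerically solve the SDE. sdeint_adjoint registers the
# backward adjoint SDE for the gradient computation instead of caching
# solver states, so memory cost is constant in the number of time steps.
ys = torchsde.sdeint_adjoint(sde, y0, ts, method="euler")

loss = (ys[-1] ** 2).mean()
loss.backward()                     # pathwise gradient w.r.t. theta
print(sde.theta.grad)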

Cite this Paper

BibTeX
@InProceedings{pmlr-v118-li20a,
  title     = {Scalable Gradients and Variational Inference for Stochastic Differential Equations},
  author    = {Li, Xuechen and Wong, Ting-Kam Leonard and Chen, Ricky T. Q. and Duvenaud, David K.},
  booktitle = {Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference},
  pages     = {1--28},
  year      = {2020},
  editor    = {Zhang, Cheng and Ruiz, Francisco and Bui, Thang and Dieng, Adji Bousso and Liang, Dawen},
  volume    = {118},
  series    = {Proceedings of Machine Learning Research},
  month     = {08 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v118/li20a/li20a.pdf},
  url       = {https://proceedings.mlr.press/v118/li20a.html}
}
Endnote
%0 Conference Paper
%T Scalable Gradients and Variational Inference for Stochastic Differential Equations
%A Xuechen Li
%A Ting-Kam Leonard Wong
%A Ricky T. Q. Chen
%A David K. Duvenaud
%B Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference
%C Proceedings of Machine Learning Research
%D 2020
%E Cheng Zhang
%E Francisco Ruiz
%E Thang Bui
%E Adji Bousso Dieng
%E Dawen Liang
%F pmlr-v118-li20a
%I PMLR
%P 1--28
%U https://proceedings.mlr.press/v118/li20a.html
%V 118
APA
Li, X., Wong, T.L., Chen, R.T.Q. & Duvenaud, D.K. (2020). Scalable Gradients and Variational Inference for Stochastic Differential Equations. Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference, in Proceedings of Machine Learning Research 118:1-28. Available from https://proceedings.mlr.press/v118/li20a.html.
