Trust Region Sequential Variational Inference

Geon-Hyeong Kim, Youngsoo Jang, Jongmin Lee, Wonseok Jeon, Hongseok Yang, Kee-Eung Kim
Proceedings of The Eleventh Asian Conference on Machine Learning, PMLR 101:1033-1048, 2019.

Abstract

Stochastic variational inference has emerged as an effective method for performing inference on or learning complex models for data. Yet, one of the challenges in stochastic variational inference is handling high-dimensional data, such as sequential data, and models with non-differentiable densities caused by, for instance, the use of discrete latent variables. In such cases, it is challenging to control the variance of the gradient estimator used in stochastic variational inference, while low variance is often one of the key properties needed for successful inference. In this work, we present a new algorithm for stochastic variational inference of sequential models which trades off bias for variance to tackle this challenge effectively. Our algorithm is inspired by variance reduction techniques in reinforcement learning, yet it uniquely adopts their key ideas in the context of stochastic variational inference. We demonstrate the effectiveness of our approach through formal analysis and experiments on synthetic and real-world datasets.
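The abstract refers to variance reduction techniques from reinforcement learning, such as subtracting a baseline from a score-function (REINFORCE) gradient estimator when the model density is non-differentiable. As a generic illustration only — not the paper's trust-region algorithm — here is a minimal sketch of that standard device for a toy Bernoulli variational distribution; the objective `f(z) = 10 + z` and all names are illustrative assumptions:

```python
import math
import random

def elbo_grad_estimate(theta, n_samples=2000, baseline=0.0, seed=0):
    """Score-function estimate of d/dtheta E_{z ~ q_theta}[f(z)].

    q_theta is Bernoulli with q(z=1) = sigmoid(theta), and
    f(z) = 10 + z is a toy objective with a large constant offset.
    Subtracting a baseline b from f(z) leaves the expectation
    unchanged (since E[grad log q] = 0) but can shrink the variance.
    Returns (mean, per-sample variance) of the estimator.
    """
    rng = random.Random(seed)
    p = 1.0 / (1.0 + math.exp(-theta))  # q(z = 1)
    samples = []
    for _ in range(n_samples):
        z = 1 if rng.random() < p else 0
        f = 10.0 + z                     # toy objective f(z)
        score = z - p                    # d/dtheta log q_theta(z) for Bernoulli
        samples.append((f - baseline) * score)
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / n_samples
    return mean, var

# At theta = 0 the true gradient is p(1 - p) = 0.25; a baseline near
# E[f(z)] = 10.5 sharply reduces the estimator's variance.
g_nb, v_nb = elbo_grad_estimate(0.0, baseline=0.0)
g_b, v_b = elbo_grad_estimate(0.0, baseline=10.0)
```

This captures only the variance-reduction idea the abstract gestures at; the paper's contribution is a trust-region scheme for sequential models that additionally trades bias for variance, which this sketch does not implement.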

Cite this Paper


BibTeX
@InProceedings{pmlr-v101-kim19a,
  title     = {Trust Region Sequential Variational Inference},
  author    = {Kim, Geon-Hyeong and Jang, Youngsoo and Lee, Jongmin and Jeon, Wonseok and Yang, Hongseok and Kim, Kee-Eung},
  booktitle = {Proceedings of The Eleventh Asian Conference on Machine Learning},
  pages     = {1033--1048},
  year      = {2019},
  editor    = {Lee, Wee Sun and Suzuki, Taiji},
  volume    = {101},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--19 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v101/kim19a/kim19a.pdf},
  url       = {https://proceedings.mlr.press/v101/kim19a.html},
  abstract  = {Stochastic variational inference has emerged as an effective method for performing inference on or learning complex models for data. Yet, one of the challenges in stochastic variational inference is handling high-dimensional data, such as sequential data, and models with non-differentiable densities caused by, for instance, the use of discrete latent variables. In such cases, it is challenging to control the variance of the gradient estimator used in stochastic variational inference, while low variance is often one of the key properties needed for successful inference. In this work, we present a new algorithm for stochastic variational inference of sequential models which trades off bias for variance to tackle this challenge effectively. Our algorithm is inspired by variance reduction techniques in reinforcement learning, yet it uniquely adopts their key ideas in the context of stochastic variational inference. We demonstrate the effectiveness of our approach through formal analysis and experiments on synthetic and real-world datasets.}
}
Endnote
%0 Conference Paper
%T Trust Region Sequential Variational Inference
%A Geon-Hyeong Kim
%A Youngsoo Jang
%A Jongmin Lee
%A Wonseok Jeon
%A Hongseok Yang
%A Kee-Eung Kim
%B Proceedings of The Eleventh Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Wee Sun Lee
%E Taiji Suzuki
%F pmlr-v101-kim19a
%I PMLR
%P 1033--1048
%U https://proceedings.mlr.press/v101/kim19a.html
%V 101
%X Stochastic variational inference has emerged as an effective method for performing inference on or learning complex models for data. Yet, one of the challenges in stochastic variational inference is handling high-dimensional data, such as sequential data, and models with non-differentiable densities caused by, for instance, the use of discrete latent variables. In such cases, it is challenging to control the variance of the gradient estimator used in stochastic variational inference, while low variance is often one of the key properties needed for successful inference. In this work, we present a new algorithm for stochastic variational inference of sequential models which trades off bias for variance to tackle this challenge effectively. Our algorithm is inspired by variance reduction techniques in reinforcement learning, yet it uniquely adopts their key ideas in the context of stochastic variational inference. We demonstrate the effectiveness of our approach through formal analysis and experiments on synthetic and real-world datasets.
APA
Kim, G., Jang, Y., Lee, J., Jeon, W., Yang, H., & Kim, K. (2019). Trust Region Sequential Variational Inference. Proceedings of The Eleventh Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 101:1033-1048. Available from https://proceedings.mlr.press/v101/kim19a.html.