Rich-Observation Reinforcement Learning with Continuous Latent Dynamics

Yuda Song, Lili Wu, Dylan J Foster, Akshay Krishnamurthy
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:46181-46237, 2024.

Abstract

Sample-efficiency and reliability remain major bottlenecks toward wide adoption of reinforcement learning algorithms in continuous settings with high-dimensional perceptual inputs. Toward addressing these challenges, we introduce a new theoretical framework, RichCLD (“Rich-Observation RL with Continuous Latent Dynamics”), in which the agent performs control based on high-dimensional observations, but the environment is governed by low-dimensional latent states and Lipschitz continuous dynamics. Our main contribution is a new algorithm for this setting that is provably statistically and computationally efficient. The core of our algorithm is a new representation learning objective; we show that prior representation learning schemes tailored to discrete dynamics do not naturally extend to the continuous setting. Our new objective is amenable to practical implementation, and empirically, we find that it compares favorably to prior schemes in a standard evaluation protocol. We further provide several insights into the statistical complexity of the RichCLD framework, in particular proving that certain notions of Lipschitzness that admit sample-efficient learning in the absence of rich observations are insufficient in the rich-observation setting.
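One way to read the setting described in the abstract is sketched below as an illustrative formalization, not the paper's verbatim definitions: the symbols (latent state z, emission distribution q, decoder φ*, latent metric d_Z, Lipschitz constant L) and the choice of Wasserstein distance W for measuring closeness of transition kernels are placeholder assumptions chosen here for exposition, and the paper itself shows that only certain Lipschitzness notions suffice in the rich-observation regime.

```latex
% Illustrative sketch of a rich-observation MDP with Lipschitz continuous latent dynamics.
% Notation (z, x, q, \phi^{\star}, d_{\mathcal{Z}}, W, L) is chosen for exposition and
% may differ from the definitions used in the paper.
\begin{align*}
  &z_h \in \mathcal{Z} \subset \mathbb{R}^{d}
      && \text{low-dimensional latent state at step } h \\
  &x_h \sim q(\cdot \mid z_h)
      && \text{high-dimensional (rich) observation emitted from the latent state} \\
  &\phi^{\star}(x_h) = z_h
      && \text{latent state is decodable from the observation} \\
  &z_{h+1} \sim P_h(\cdot \mid z_h, a_h)
      && \text{latent transition dynamics} \\
  &W\bigl(P_h(\cdot \mid z, a),\; P_h(\cdot \mid z', a)\bigr) \le L\, d_{\mathcal{Z}}(z, z')
      && \text{Lipschitz continuity of the latent dynamics}
\end{align*}
```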

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-song24i,
  title     = {Rich-Observation Reinforcement Learning with Continuous Latent Dynamics},
  author    = {Song, Yuda and Wu, Lili and Foster, Dylan J and Krishnamurthy, Akshay},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {46181--46237},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/song24i/song24i.pdf},
  url       = {https://proceedings.mlr.press/v235/song24i.html}
}
Endnote
%0 Conference Paper
%T Rich-Observation Reinforcement Learning with Continuous Latent Dynamics
%A Yuda Song
%A Lili Wu
%A Dylan J Foster
%A Akshay Krishnamurthy
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-song24i
%I PMLR
%P 46181--46237
%U https://proceedings.mlr.press/v235/song24i.html
%V 235
APA
Song, Y., Wu, L., Foster, D.J. & Krishnamurthy, A. (2024). Rich-Observation Reinforcement Learning with Continuous Latent Dynamics. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:46181-46237. Available from https://proceedings.mlr.press/v235/song24i.html.
