Causal Dynamics Learning for Task-Independent State Abstraction

Zizhao Wang, Xuesu Xiao, Zifan Xu, Yuke Zhu, Peter Stone
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:23151-23180, 2022.

Abstract

Learning dynamics models accurately is an important goal for Model-Based Reinforcement Learning (MBRL), but most MBRL methods learn a dense dynamics model that is vulnerable to spurious correlations and therefore generalizes poorly to unseen states. In this paper, we introduce Causal Dynamics Learning for Task-Independent State Abstraction (CDL), which first learns a causal dynamics model, with theoretical guarantees, that removes unnecessary dependencies between state variables and the action and thus generalizes well to unseen states. A state abstraction can then be derived from the learned dynamics; it not only improves sample efficiency but also applies to a wider range of tasks than existing state abstraction methods. In evaluations on two simulated environments and their downstream tasks, both the dynamics model and the policies learned by the proposed method generalize well to unseen states, and the derived state abstraction improves sample efficiency compared to learning without it.
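
The abstract describes a two-step pipeline: (1) learn a sparse causal dynamics model by pruning unnecessary dependencies between state variables and the action, and (2) derive a task-specific state abstraction from the learned causal graph. The Python sketch below illustrates one plausible shape of these steps, assuming a conditional-mutual-information (CMI) test for edge pruning; the names `estimate_cmi`, `learn_causal_graph`, `ancestors`, and the threshold `eps` are illustrative placeholders, not the authors' implementation or API.

import numpy as np

def learn_causal_graph(transitions, n_state_vars, estimate_cmi, eps=0.02):
    """Keep edge (x_j -> s'_i) iff the estimated CMI between s'_i and x_j,
    given the remaining inputs, exceeds eps, where x = (s_1, ..., s_n, a).
    Returns a boolean adjacency matrix of shape (n_state_vars + 1, n_state_vars).
    `estimate_cmi` is a placeholder for any CMI estimator over the data."""
    n_inputs = n_state_vars + 1  # all state variables plus the action
    graph = np.zeros((n_inputs, n_state_vars), dtype=bool)
    for i in range(n_state_vars):      # each next-step state variable s'_i
        for j in range(n_inputs):      # each candidate parent x_j
            cmi = estimate_cmi(transitions, target=i, candidate=j)
            graph[j, i] = cmi > eps    # prune spurious (low-CMI) dependencies
    return graph

def ancestors(graph, reward_relevant_vars, n_state_vars):
    """Derive a state abstraction for a task: keep the state variables that
    can influence the reward-relevant variables through the dynamics, i.e.
    their causal ancestors in the learned graph (the action row is skipped
    because only state variables are abstracted)."""
    keep = set(reward_relevant_vars)
    frontier = list(reward_relevant_vars)
    while frontier:
        i = frontier.pop()
        for j in range(n_state_vars):
            if graph[j, i] and j not in keep:
                keep.add(j)
                frontier.append(j)
    return sorted(keep)

# Usage sketch: learn the graph once (task-independent), then derive a
# per-task abstraction from whichever variables that task's reward reads.
# graph = learn_causal_graph(transitions, n_state_vars, estimate_cmi)
# abstraction = ancestors(graph, reward_relevant_vars, n_state_vars)

Because the graph is learned from the dynamics alone, it is shared across tasks; only the final ancestor query depends on which variables a task's reward touches, which is what makes the abstraction task-independent to learn but task-specific to apply.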

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-wang22ae,
  title     = {Causal Dynamics Learning for Task-Independent State Abstraction},
  author    = {Wang, Zizhao and Xiao, Xuesu and Xu, Zifan and Zhu, Yuke and Stone, Peter},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {23151--23180},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/wang22ae/wang22ae.pdf},
  url       = {https://proceedings.mlr.press/v162/wang22ae.html}
}
Endnote
%0 Conference Paper
%T Causal Dynamics Learning for Task-Independent State Abstraction
%A Zizhao Wang
%A Xuesu Xiao
%A Zifan Xu
%A Yuke Zhu
%A Peter Stone
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-wang22ae
%I PMLR
%P 23151--23180
%U https://proceedings.mlr.press/v162/wang22ae.html
%V 162
APA
Wang, Z., Xiao, X., Xu, Z., Zhu, Y. & Stone, P. (2022). Causal Dynamics Learning for Task-Independent State Abstraction. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:23151-23180. Available from https://proceedings.mlr.press/v162/wang22ae.html.