Towards Governing Agent’s Efficacy: Action-Conditional β-VAE for Deep Transparent Reinforcement Learning

John Yang, Gyuejeong Lee, Simyung Chang, Nojun Kwak
Proceedings of The Eleventh Asian Conference on Machine Learning, PMLR 101:32-47, 2019.

Abstract

We tackle the black-box issue of deep neural networks in reinforcement learning (RL) settings, where neural agents learn to maximize reward gains in an uncontrollable way. Such a learning approach is risky when the interacting environment has an expansive state space, because it is then almost impossible to foresee all unwanted outcomes and penalize them with negative rewards beforehand. We propose the Action-Conditional $\beta$-VAE (AC-$\beta$-VAE), which succinctly maps action-dependent factors into designated dimensions of the latent representation while disentangling environmental factors. Our method addresses the black-box issue by encouraging the RL policy network to learn interpretable latent features, distinguishing the agent's own influences from uncontrollable environmental factors, which closely resembles how humans understand their scenes. Our experimental results show that the learned latent factors are not only interpretable but also enable modeling the distribution of the entire visited state-action space. Our experiments demonstrate that this characteristic of the proposed structure can enable ex post facto governance of the desired behaviors of RL agents.
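The abstract describes a $\beta$-VAE conditioned on actions, with the decoder's action input pressuring designated latent dimensions to capture action-dependent factors. The sketch below shows only the standard $\beta$-VAE objective (reconstruction error plus a $\beta$-weighted KL term) that such a model builds on; the function names, shapes, and the way the action is threaded in are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_gaussian(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def ac_beta_vae_loss(state, action, recon, mu, logvar, beta=4.0):
    # Illustrative objective: squared reconstruction error plus the
    # beta-weighted KL term of a beta-VAE. In the paper's setup the action
    # enters through the decoder (not modeled here), which is what drives
    # the designated latent dimensions toward action-dependent factors.
    recon_err = np.sum((state - recon) ** 2)
    return recon_err + beta * kl_gaussian(mu, logvar)

# Degenerate check: perfect reconstruction, standard-normal posterior.
mu, logvar = np.zeros(8), np.zeros(8)
state = rng.normal(size=(16,))
loss = ac_beta_vae_loss(state, None, state.copy(), mu, logvar, beta=4.0)
print(loss)  # 0.0 -- both terms vanish
```

Raising `beta` above 1 strengthens the pressure toward a factorized (disentangled) posterior at the cost of reconstruction fidelity, which is the trade-off the abstract's "disentangling environmental factors" relies on.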

Cite this Paper


BibTeX
@InProceedings{pmlr-v101-yang19a, title = {Towards Governing Agent’s Efficacy: Action-Conditional $\beta$-VAE for Deep Transparent Reinforcement Learning}, author = {Yang, John and Lee, Gyuejeong and Chang, Simyung and Kwak, Nojun}, booktitle = {Proceedings of The Eleventh Asian Conference on Machine Learning}, pages = {32--47}, year = {2019}, editor = {Lee, Wee Sun and Suzuki, Taiji}, volume = {101}, series = {Proceedings of Machine Learning Research}, month = {17--19 Nov}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v101/yang19a/yang19a.pdf}, url = {https://proceedings.mlr.press/v101/yang19a.html}, abstract = {We tackle the black-box issue of deep neural networks in reinforcement learning (RL) settings, where neural agents learn to maximize reward gains in an uncontrollable way. Such a learning approach is risky when the interacting environment has an expansive state space, because it is then almost impossible to foresee all unwanted outcomes and penalize them with negative rewards beforehand. We propose the Action-Conditional $\beta$-VAE (AC-$\beta$-VAE), which succinctly maps action-dependent factors into designated dimensions of the latent representation while disentangling environmental factors. Our method addresses the black-box issue by encouraging the RL policy network to learn interpretable latent features, distinguishing the agent's own influences from uncontrollable environmental factors, which closely resembles how humans understand their scenes. Our experimental results show that the learned latent factors are not only interpretable but also enable modeling the distribution of the entire visited state-action space. Our experiments demonstrate that this characteristic of the proposed structure can enable ex post facto governance of the desired behaviors of RL agents.} }
Endnote
%0 Conference Paper %T Towards Governing Agent’s Efficacy: Action-Conditional β-VAE for Deep Transparent Reinforcement Learning %A John Yang %A Gyuejeong Lee %A Simyung Chang %A Nojun Kwak %B Proceedings of The Eleventh Asian Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2019 %E Wee Sun Lee %E Taiji Suzuki %F pmlr-v101-yang19a %I PMLR %P 32--47 %U https://proceedings.mlr.press/v101/yang19a.html %V 101 %X We tackle the black-box issue of deep neural networks in reinforcement learning (RL) settings, where neural agents learn to maximize reward gains in an uncontrollable way. Such a learning approach is risky when the interacting environment has an expansive state space, because it is then almost impossible to foresee all unwanted outcomes and penalize them with negative rewards beforehand. We propose the Action-Conditional β-VAE (AC-β-VAE), which succinctly maps action-dependent factors into designated dimensions of the latent representation while disentangling environmental factors. Our method addresses the black-box issue by encouraging the RL policy network to learn interpretable latent features, distinguishing the agent's own influences from uncontrollable environmental factors, which closely resembles how humans understand their scenes. Our experimental results show that the learned latent factors are not only interpretable but also enable modeling the distribution of the entire visited state-action space. Our experiments demonstrate that this characteristic of the proposed structure can enable ex post facto governance of the desired behaviors of RL agents.
APA
Yang, J., Lee, G., Chang, S. & Kwak, N. (2019). Towards Governing Agent’s Efficacy: Action-Conditional β-VAE for Deep Transparent Reinforcement Learning. Proceedings of The Eleventh Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 101:32-47. Available from https://proceedings.mlr.press/v101/yang19a.html.