Reinforcement Learning of POMDPs using Spectral Methods

Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar
29th Annual Conference on Learning Theory, PMLR 49:193-256, 2016.

Abstract

We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDPs) based on spectral decomposition methods. While spectral methods have previously been employed for consistent learning of (passive) latent variable models such as hidden Markov models, POMDPs are more challenging since the learner interacts with the environment and possibly changes the future observations in the process. We devise a learning algorithm that runs through episodes; in each episode we employ spectral techniques to learn the POMDP parameters from a trajectory generated by a fixed policy. At the end of the episode, an optimization oracle returns the optimal memoryless planning policy, which maximizes the expected reward based on the estimated POMDP model. We prove an order-optimal regret bound w.r.t. the optimal memoryless policy and efficient scaling with respect to the dimensionality of the observation and action spaces.
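To make the episodic structure concrete, the following minimal Python sketch mirrors the loop described above: a fixed memoryless policy generates a trajectory, model quantities are re-estimated from that trajectory, and a planning oracle returns a new memoryless policy for the next episode. In the paper the estimation step uses spectral (tensor) decomposition of multi-view moments and the oracle plans optimistically over the estimated POMDP; the routines below (ToyPOMDP, estimate_model, plan_memoryless) are simplified stand-ins introduced only for illustration, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

class ToyPOMDP:
    """Tiny synthetic POMDP: X hidden states, Y observations, A actions."""
    def __init__(self, X=2, Y=3, A=2):
        self.X, self.Y, self.A = X, Y, A
        self.T = rng.dirichlet(np.ones(X), size=(A, X))  # T[a, x]: next-state distribution
        self.O = rng.dirichlet(np.ones(Y), size=X)       # O[x]: observation distribution
        self.R = rng.random((X, A))                      # mean reward of (hidden state, action)
        self.x = 0                                       # current hidden state

    def step(self, a):
        y = rng.choice(self.Y, p=self.O[self.x])         # emit observation from current state
        r = self.R[self.x, a]                            # reward depends on hidden state and action
        self.x = rng.choice(self.X, p=self.T[a, self.x]) # transition to the next hidden state
        return y, r

def run_episode(env, policy, n_steps):
    """Generate a trajectory (y_t, a_t, r_t) under a fixed memoryless policy."""
    ys, acts, rs = [], [], []
    y, _ = env.step(rng.integers(env.A))        # warm-up step to obtain an initial observation
    for _ in range(n_steps):
        a = rng.choice(env.A, p=policy[y])      # memoryless: action depends only on the current observation
        ys.append(y)
        acts.append(a)
        y, r = env.step(a)
        rs.append(r)
    return np.array(ys), np.array(acts), np.array(rs)

def estimate_model(ys, acts, rs, Y, A):
    """Stand-in for the spectral estimation step: empirical mean reward and visit counts per (observation, action)."""
    sums = np.zeros((Y, A))
    counts = np.zeros((Y, A))
    for y, a, r in zip(ys, acts, rs):
        sums[y, a] += r
        counts[y, a] += 1
    return sums / np.maximum(counts, 1), counts

def plan_memoryless(mean_r, counts, c=1.0):
    """Stand-in for the optimization oracle: for each observation, pick the action with the best optimistic estimate."""
    optimistic = mean_r + c / np.sqrt(np.maximum(counts, 1))  # count-based exploration bonus
    policy = np.zeros_like(mean_r)
    policy[np.arange(mean_r.shape[0]), np.argmax(optimistic, axis=1)] = 1.0
    return policy

env = ToyPOMDP()
policy = np.full((env.Y, env.A), 1.0 / env.A)   # first episode: uniform memoryless policy
for episode in range(5):
    ys, acts, rs = run_episode(env, policy, n_steps=2000)
    mean_r, counts = estimate_model(ys, acts, rs, env.Y, env.A)
    policy = plan_memoryless(mean_r, counts)
    print(f"episode {episode}: average reward {rs.mean():.3f}")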

Cite this Paper


BibTeX
@InProceedings{pmlr-v49-azizzadenesheli16a,
  title     = {Reinforcement Learning of POMDPs using Spectral Methods},
  author    = {Azizzadenesheli, Kamyar and Lazaric, Alessandro and Anandkumar, Animashree},
  booktitle = {29th Annual Conference on Learning Theory},
  pages     = {193--256},
  year      = {2016},
  editor    = {Feldman, Vitaly and Rakhlin, Alexander and Shamir, Ohad},
  volume    = {49},
  series    = {Proceedings of Machine Learning Research},
  address   = {Columbia University, New York, New York, USA},
  month     = {23--26 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v49/azizzadenesheli16a.pdf},
  url       = {https://proceedings.mlr.press/v49/azizzadenesheli16a.html},
  abstract  = {We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDP) based on spectral decomposition methods. While spectral methods have been previously employed for consistent learning of (passive) latent variable models such as hidden Markov models, POMDPs are more challenging since the learner interacts with the environment and possibly changes the future observations in the process. We devise a learning algorithm running through episodes, in each episode we employ spectral techniques to learn the POMDP parameters from a trajectory generated by a fixed policy. At the end of the episode, an optimization oracle returns the optimal memoryless planning policy which maximizes the expected reward based on the estimated POMDP model. We prove an order-optimal regret bound w.r.t. the optimal memoryless policy and efficient scaling with respect to the dimensionality of observation and action spaces.}
}
Endnote
%0 Conference Paper
%T Reinforcement Learning of POMDPs using Spectral Methods
%A Kamyar Azizzadenesheli
%A Alessandro Lazaric
%A Animashree Anandkumar
%B 29th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2016
%E Vitaly Feldman
%E Alexander Rakhlin
%E Ohad Shamir
%F pmlr-v49-azizzadenesheli16a
%I PMLR
%P 193--256
%U https://proceedings.mlr.press/v49/azizzadenesheli16a.html
%V 49
%X We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDP) based on spectral decomposition methods. While spectral methods have been previously employed for consistent learning of (passive) latent variable models such as hidden Markov models, POMDPs are more challenging since the learner interacts with the environment and possibly changes the future observations in the process. We devise a learning algorithm running through episodes, in each episode we employ spectral techniques to learn the POMDP parameters from a trajectory generated by a fixed policy. At the end of the episode, an optimization oracle returns the optimal memoryless planning policy which maximizes the expected reward based on the estimated POMDP model. We prove an order-optimal regret bound w.r.t. the optimal memoryless policy and efficient scaling with respect to the dimensionality of observation and action spaces.
RIS
TY  - CPAPER
TI  - Reinforcement Learning of POMDPs using Spectral Methods
AU  - Kamyar Azizzadenesheli
AU  - Alessandro Lazaric
AU  - Animashree Anandkumar
BT  - 29th Annual Conference on Learning Theory
DA  - 2016/06/06
ED  - Vitaly Feldman
ED  - Alexander Rakhlin
ED  - Ohad Shamir
ID  - pmlr-v49-azizzadenesheli16a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 49
SP  - 193
EP  - 256
L1  - http://proceedings.mlr.press/v49/azizzadenesheli16a.pdf
UR  - https://proceedings.mlr.press/v49/azizzadenesheli16a.html
AB  - We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDP) based on spectral decomposition methods. While spectral methods have been previously employed for consistent learning of (passive) latent variable models such as hidden Markov models, POMDPs are more challenging since the learner interacts with the environment and possibly changes the future observations in the process. We devise a learning algorithm running through episodes, in each episode we employ spectral techniques to learn the POMDP parameters from a trajectory generated by a fixed policy. At the end of the episode, an optimization oracle returns the optimal memoryless planning policy which maximizes the expected reward based on the estimated POMDP model. We prove an order-optimal regret bound w.r.t. the optimal memoryless policy and efficient scaling with respect to the dimensionality of observation and action spaces.
ER  -
APA
Azizzadenesheli, K., Lazaric, A., & Anandkumar, A. (2016). Reinforcement Learning of POMDPs using Spectral Methods. 29th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 49:193-256. Available from https://proceedings.mlr.press/v49/azizzadenesheli16a.html.