Open Problem: Approximate Planning of POMDPs in the class of Memoryless Policies

Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar

29th Annual Conference on Learning Theory, PMLR 49:1639-1642, 2016.

Abstract

Planning plays an important role in decision theory and has drawn much recent attention in robotics and sequential decision making. Reinforcement Learning (RL), framed as an agent-environment interaction problem, has brought further attention to planning methods. In RL one typically assumes a generative model of the environment, e.g. a graphical model; the agent's task is then to learn the model parameters and to find an optimal strategy under the learned parameters. Depending on the environment's behavior, the agent can assume various generative models, e.g. a Multi-Armed Bandit for a static environment, or a Markov Decision Process (MDP) for a dynamic one. The advantage of these popular models is their simplicity, which yields tractable methods for learning the parameters and finding the optimal policy. Their drawback is, again, their simplicity: these models often underfit and underestimate the actual environment behavior. In robotics, for example, the agent usually receives only noisy observations of the environment's inner state, and an MDP is not a suitable model. Richer models such as the Partially Observable Markov Decision Process (POMDP) compensate for this drawback: fitting a POMDP to an environment that reveals only partial observations generally gives a dramatic, sometimes unbounded, performance improvement over an MDP. In general, however, finding the optimal policy for a POMDP is computationally intractable and highly non-convex, even within the class of memoryless policies. The open problem is to devise a method that finds an exact or approximate optimal stochastic memoryless policy for POMDP models.
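To make the object of the open problem concrete: a stochastic memoryless policy maps each observation (not the hidden state, and not a history) to a distribution over actions. For a *fixed* such policy, evaluation is easy — the policy induces a Markov chain over the hidden states, and the value solves a linear system. The hard, non-convex part is optimizing over the policy simplex. The sketch below, with an arbitrary randomly generated toy POMDP (all numbers are illustrative assumptions, not from the paper), shows the evaluation step only.

```python
import numpy as np

# Toy POMDP: evaluate a fixed stochastic memoryless policy exactly.
# All model parameters are randomly generated for illustration.
rng = np.random.default_rng(0)

n_s, n_o, n_a = 3, 2, 2   # hidden states, observations, actions
gamma = 0.9               # discount factor

# Transition kernels T[a, s, s'], observation matrix O[s, o], rewards R[s, a]
T = rng.dirichlet(np.ones(n_s), size=(n_a, n_s))   # shape (n_a, n_s, n_s)
O = rng.dirichlet(np.ones(n_o), size=n_s)          # shape (n_s, n_o)
R = rng.uniform(0.0, 1.0, size=(n_s, n_a))

# A memoryless policy: a distribution over actions for each observation.
pi = rng.dirichlet(np.ones(n_a), size=n_o)         # shape (n_o, n_a)

# The policy sees only observations, so in hidden state s it plays action a
# with probability p(a|s) = sum_o O[s, o] * pi[o, a].
p_a_given_s = O @ pi                               # shape (n_s, n_a)

# Induced Markov chain over hidden states, and expected immediate reward.
P = np.einsum('sa,ast->st', p_a_given_s, T)        # shape (n_s, n_s)
r = np.einsum('sa,sa->s', p_a_given_s, R)          # shape (n_s,)

# Exact policy evaluation: solve (I - gamma * P) V = r.
V = np.linalg.solve(np.eye(n_s) - gamma * P, r)
print(V)
```

Note that `V` is linear-algebra-cheap once `pi` is fixed; the open problem concerns searching over `pi` itself, where the objective is non-convex in the policy parameters.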

Cite this Paper

BibTeX

@InProceedings{pmlr-v49-azizzadenesheli16b,
title = {Open Problem: Approximate Planning of POMDPs in the class of Memoryless Policies},
author = {Azizzadenesheli, Kamyar and Lazaric, Alessandro and Anandkumar, Animashree},
booktitle = {29th Annual Conference on Learning Theory},
pages = {1639--1642},
year = {2016},
editor = {Feldman, Vitaly and Rakhlin, Alexander and Shamir, Ohad},
volume = {49},
series = {Proceedings of Machine Learning Research},
address = {Columbia University, New York, New York, USA},
month = {23--26 Jun},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v49/azizzadenesheli16b.pdf},
url = {https://proceedings.mlr.press/v49/azizzadenesheli16b.html},
abstract = {Planning plays an important role in the broad class of decision theory. Planning has drawn much attention in recent work in the robotics and sequential decision making areas. Recently, Reinforcement Learning (RL), as an agent-environment interaction problem, has brought further attention to planning methods. Generally in RL, one can assume a generative model, e.g. graphical models, for the environment, and then the task for the RL agent is to learn the model parameters and find the optimal strategy based on these learnt parameters. Based on environment behavior, the agent can assume various types of generative models, e.g. Multi Armed Bandit for a static environment, or Markov Decision Process (MDP) for a dynamic environment. The advantage of these popular models is their simplicity, which results in tractable methods of learning the parameters and finding the optimal policy. The drawback of these models is again their simplicity: these models usually underfit and underestimate the actual environment behavior. For example, in robotics, the agent usually has noisy observations of the environment inner state and MDP is not a suitable model. More complex models like Partially Observable Markov Decision Process (POMDP) can compensate for this drawback. Fitting this model to the environment, where the partial observation is given to the agent, generally gives dramatic performance improvement, sometimes unbounded improvement, compared to MDP. In general, finding the optimal policy for the POMDP model is computationally intractable and fully non convex, even for the class of memoryless policies. The open problem is to come up with a method to find an exact or an approximate optimal stochastic memoryless policy for POMDP models.}
}

EndNote

%0 Conference Paper
%T Open Problem: Approximate Planning of POMDPs in the class of Memoryless Policies
%A Kamyar Azizzadenesheli
%A Alessandro Lazaric
%A Animashree Anandkumar
%B 29th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2016
%E Vitaly Feldman
%E Alexander Rakhlin
%E Ohad Shamir
%F pmlr-v49-azizzadenesheli16b
%I PMLR
%P 1639--1642
%U https://proceedings.mlr.press/v49/azizzadenesheli16b.html
%V 49
%X Planning plays an important role in the broad class of decision theory. Planning has drawn much attention in recent work in the robotics and sequential decision making areas. Recently, Reinforcement Learning (RL), as an agent-environment interaction problem, has brought further attention to planning methods. Generally in RL, one can assume a generative model, e.g. graphical models, for the environment, and then the task for the RL agent is to learn the model parameters and find the optimal strategy based on these learnt parameters. Based on environment behavior, the agent can assume various types of generative models, e.g. Multi Armed Bandit for a static environment, or Markov Decision Process (MDP) for a dynamic environment. The advantage of these popular models is their simplicity, which results in tractable methods of learning the parameters and finding the optimal policy. The drawback of these models is again their simplicity: these models usually underfit and underestimate the actual environment behavior. For example, in robotics, the agent usually has noisy observations of the environment inner state and MDP is not a suitable model. More complex models like Partially Observable Markov Decision Process (POMDP) can compensate for this drawback. Fitting this model to the environment, where the partial observation is given to the agent, generally gives dramatic performance improvement, sometimes unbounded improvement, compared to MDP. In general, finding the optimal policy for the POMDP model is computationally intractable and fully non convex, even for the class of memoryless policies. The open problem is to come up with a method to find an exact or an approximate optimal stochastic memoryless policy for POMDP models.

RIS

TY - CPAPER
TI - Open Problem: Approximate Planning of POMDPs in the class of Memoryless Policies
AU - Kamyar Azizzadenesheli
AU - Alessandro Lazaric
AU - Animashree Anandkumar
BT - 29th Annual Conference on Learning Theory
DA - 2016/06/06
ED - Vitaly Feldman
ED - Alexander Rakhlin
ED - Ohad Shamir
ID - pmlr-v49-azizzadenesheli16b
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 49
SP - 1639
EP - 1642
L1 - http://proceedings.mlr.press/v49/azizzadenesheli16b.pdf
UR - https://proceedings.mlr.press/v49/azizzadenesheli16b.html
AB - Planning plays an important role in the broad class of decision theory. Planning has drawn much attention in recent work in the robotics and sequential decision making areas. Recently, Reinforcement Learning (RL), as an agent-environment interaction problem, has brought further attention to planning methods. Generally in RL, one can assume a generative model, e.g. graphical models, for the environment, and then the task for the RL agent is to learn the model parameters and find the optimal strategy based on these learnt parameters. Based on environment behavior, the agent can assume various types of generative models, e.g. Multi Armed Bandit for a static environment, or Markov Decision Process (MDP) for a dynamic environment. The advantage of these popular models is their simplicity, which results in tractable methods of learning the parameters and finding the optimal policy. The drawback of these models is again their simplicity: these models usually underfit and underestimate the actual environment behavior. For example, in robotics, the agent usually has noisy observations of the environment inner state and MDP is not a suitable model. More complex models like Partially Observable Markov Decision Process (POMDP) can compensate for this drawback. Fitting this model to the environment, where the partial observation is given to the agent, generally gives dramatic performance improvement, sometimes unbounded improvement, compared to MDP. In general, finding the optimal policy for the POMDP model is computationally intractable and fully non convex, even for the class of memoryless policies. The open problem is to come up with a method to find an exact or an approximate optimal stochastic memoryless policy for POMDP models.
ER -

APA

Azizzadenesheli, K., Lazaric, A. & Anandkumar, A. (2016). Open Problem: Approximate Planning of POMDPs in the class of Memoryless Policies. 29th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 49:1639-1642. Available from https://proceedings.mlr.press/v49/azizzadenesheli16b.html.