Action-Constrained Markov Decision Processes With Kullback-Leibler Cost
Proceedings of the 31st Conference On Learning Theory, PMLR 75:1431-1444, 2018.
Abstract
This paper concerns computation of optimal policies in which the one-step reward function contains a cost term that models Kullback-Leibler divergence with respect to nominal dynamics. This technique was introduced by Todorov in 2007, where it was shown under general conditions that the solution to the average-reward optimality equations reduces to a simple eigenvector problem. Since then many authors have sought to apply this technique to control problems and models of bounded rationality in economics. A crucial assumption is that the input process is essentially unconstrained. For example, if the nominal dynamics include randomness from nature (e.g., the impact of wind on a moving vehicle), then the optimal control solution does not respect the exogenous nature of this disturbance. This paper introduces a technique to solve a more general class of action-constrained MDPs. The main idea is to solve an entire parameterized family of MDPs, in which the parameter is a scalar weighting the one-step reward function. The approach is new and practical even in the original unconstrained formulation.
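The eigenvector reduction mentioned in the abstract can be sketched numerically. The following is a minimal illustration (not the paper's method) of Todorov's unconstrained linearly-solvable formulation: with a one-step state cost c(x) plus a KL penalty relative to a nominal kernel P0, the average-cost optimality equation reduces to the principal eigenproblem for the matrix diag(exp(-c)) P0, whose eigenvector z gives the "desirability" function and whose eigenvalue determines the optimal average cost. The problem instance here (random P0, random c) is hypothetical.

```python
import numpy as np

# Illustrative sketch of the linearly-solvable MDP eigenvector reduction
# (Todorov, 2007); the random instance below is purely for demonstration.
rng = np.random.default_rng(0)
n = 5
P0 = rng.random((n, n))
P0 /= P0.sum(axis=1, keepdims=True)   # nominal (uncontrolled) transition kernel
c = rng.random(n)                      # one-step state cost c(x)

# The optimality equation reduces to:  lam * z = diag(exp(-c)) @ P0 @ z
M = np.diag(np.exp(-c)) @ P0

# Power iteration for the principal (Perron-Frobenius) eigenpair.
z = np.ones(n)
for _ in range(2000):
    z_new = M @ z
    lam = np.linalg.norm(z_new)
    z = z_new / lam

# Optimal controlled dynamics: the nominal kernel "twisted" by z,
# then renormalized row-wise.
P_star = P0 * z[None, :]
P_star /= P_star.sum(axis=1, keepdims=True)

avg_cost = -np.log(lam)   # optimal average cost for this formulation
```

The key point motivating the paper is visible here: P_star reweights every coordinate of the transition kernel, so if some of that randomness is an exogenous disturbance (e.g., wind), this unconstrained solution distorts it, which is what the action-constrained formulation is designed to avoid.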