Scaling Up Approximate Value Iteration with Options: Better Policies with Fewer Iterations


Timothy Mann, Shie Mannor;
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(1):127-135, 2014.


We show how options, a class of control structures encompassing both primitive and temporally extended actions, can play a valuable role in planning in MDPs with continuous state-spaces. Analyzing the convergence rate of Approximate Value Iteration with options reveals that, for pessimistic initial value function estimates, options can speed up convergence compared to planning with only primitive actions, even when the temporally extended actions are suboptimal and sparsely scattered throughout the state-space. Our experimental results in an optimal replacement task and a complex inventory management task demonstrate the potential for options to speed up convergence in practice. We show that options induce faster convergence to the optimal value function, allowing better policies to be derived with fewer iterations.
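The intuition behind the convergence claim can be illustrated with a minimal sketch, which is not the paper's algorithm or experimental setup: in exact Value Iteration on a toy chain MDP, a pessimistic (all-zeros) initialization means value information propagates from the goal one state per iteration under primitive actions, while a temporally extended option that jumps several states lets the same information propagate much faster. The chain size, option duration, and reward structure below are illustrative assumptions.

```python
import numpy as np

def value_iteration(n_states, actions, gamma=0.9, tol=1e-6, max_iter=10000):
    """Tabular VI where actions(s) yields (next_state, reward, duration) triples.
    An option of duration d is backed up as: r + gamma**d * V(next_state)."""
    V = np.zeros(n_states)  # pessimistic initialization (true values are >= 0)
    for it in range(1, max_iter + 1):
        V_new = np.array([
            max(r + gamma**d * V[s2] for (s2, r, d) in actions(s))
            for s in range(n_states)
        ])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, it
        V = V_new
    return V, max_iter

N, GOAL, GAMMA = 50, 49, 0.9

def primitive(s):
    # Single-step "move right"; reward 1 on arrival at the absorbing goal.
    if s == GOAL:
        return [(GOAL, 0.0, 1)]
    s2 = s + 1
    return [(s2, 1.0 if s2 == GOAL else 0.0, 1)]

def with_option(s):
    # Primitive action plus one option: move right for up to 10 steps.
    # Its reward is the discounted sum along the path (only the goal pays).
    acts = primitive(s)
    if s != GOAL:
        d = min(10, GOAL - s)
        s2 = s + d
        acts.append((s2, GAMMA**(d - 1) if s2 == GOAL else 0.0, d))
    return acts

V_prim, it_prim = value_iteration(N, primitive, gamma=GAMMA)
V_opt, it_opt = value_iteration(N, with_option, gamma=GAMMA)
print(f"primitive-only: {it_prim} iterations; with option: {it_opt} iterations")
```

Because the option exactly replicates a sequence of primitive steps (with matching discounting), both runs converge to the same optimal value function, but the run with the option needs far fewer iterations; this mirrors, in a trivial setting, the speed-up the paper analyzes for Approximate Value Iteration.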