Sparse Reinforcement Learning via Convex Optimization
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):424-432, 2014.
Abstract
We propose two new algorithms for the sparse reinforcement learning problem, each based on a different formulation. The first is an off-line method that uses the alternating direction method of multipliers (ADMM) to solve a constrained formulation that explicitly controls the projected Bellman residual. The second is an online stochastic approximation algorithm that applies the regularized dual averaging technique to the Lagrangian formulation. The convergence of both algorithms is established. We demonstrate the performance of these algorithms on two classical examples.
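As a rough illustration of the off-line approach, the sketch below applies ADMM to a generic ℓ1-regularized least-squares problem. This is the standard LASSO splitting, not the paper's exact constrained formulation; the matrix `A` and vector `b` are hypothetical stand-ins for the feature and projected-Bellman-residual quantities the abstract refers to.

```python
import numpy as np

def admm_l1_least_squares(A, b, lam, rho=1.0, n_iter=200):
    """ADMM sketch for min_x 0.5*||A x - b||^2 + lam*||x||_1.

    A generic instance of the ADMM pattern mentioned in the abstract,
    assumed here for illustration only (not the paper's formulation).
    """
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)   # auxiliary (sparse) copy of x
    u = np.zeros(n)   # scaled dual variable
    # Pre-factorize the x-subproblem's normal matrix once.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        # x-update: ridge-regularized least squares via the Cholesky factor.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft-thresholding, the prox of lam*||.||_1.
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # Dual ascent on the consensus constraint x = z.
        u = u + x - z
    return z
```

The soft-thresholding step is what produces sparse solutions; the constrained variant in the paper would replace the penalty with an explicit bound on the residual, but the alternating structure is analogous.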