Sparse Reinforcement Learning via Convex Optimization

Zhiwei Qin, Weichang Li, Firdaus Janoos
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):424-432, 2014.

Abstract

We propose two new algorithms for the sparse reinforcement learning problem, based on different formulations. The first is an offline method that uses the alternating direction method of multipliers (ADMM) to solve a constrained formulation that explicitly controls the projected Bellman residual. The second is an online stochastic approximation algorithm that applies the regularized dual averaging (RDA) technique to the Lagrangian formulation. The convergence of both algorithms is established. We demonstrate the performance of these algorithms on two classical examples.
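For orientation, here is a rough sketch of the two formulations the abstract alludes to, assuming the standard linear value-function approximation v ≈ Φw with the usual LSTD quantities A = ΦᵀD(Φ − γΦ′) and b = Φᵀ D r, so that the projected Bellman residual is ‖Aw − b‖. The notation is our reconstruction, not verbatim from the paper:

\begin{align*}
  \text{Constrained (offline, ADMM):}\quad
    & \min_{w}\ \|w\|_1
      \quad \text{s.t.}\quad \tfrac{1}{2}\,\|Aw - b\|_2^2 \le \epsilon, \\
  \text{Lagrangian (online, RDA):}\quad
    & \min_{w}\ \tfrac{1}{2}\,\|Aw - b\|_2^2 + \lambda\,\|w\|_1 .
\end{align*}

As a concrete illustration of the ADMM machinery the first algorithm builds on, the following is a minimal NumPy sketch of ADMM applied to the Lagrangian (lasso-type) form above. Note the paper's offline method targets the constrained formulation instead, so treat this as illustrative only; all names (admm_lasso, rho, lam) are ours, not the paper's.

import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for min_w 0.5*||A w - b||_2^2 + lam*||w||_1.

    Splits w = z and alternates a ridge-like w-update, a shrinkage
    z-update, and a dual ascent step on the scaled multiplier u.
    """
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)
    # Cache the linear system shared by every w-update.
    AtA_rho = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        w = np.linalg.solve(AtA_rho, Atb + rho * (z - u))
        z = soft_threshold(w + u, lam / rho)
        u = u + w - z
    return z  # z is exactly sparse; w is only approximately so

# Toy usage on synthetic LSTD-like data:
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
w_true = np.zeros(20); w_true[:3] = [1.0, -2.0, 0.5]
b = A @ w_true + 0.01 * rng.standard_normal(100)
print(admm_lasso(A, b, lam=1.0).round(2))

Returning z rather than w is the standard choice in lasso-type ADMM, since the soft-thresholded variable carries the exact sparsity pattern.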

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-qin14,
  title     = {Sparse Reinforcement Learning via Convex Optimization},
  author    = {Qin, Zhiwei and Li, Weichang and Janoos, Firdaus},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {424--432},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/qin14.pdf},
  url       = {https://proceedings.mlr.press/v32/qin14.html},
}
APA
Qin, Z., Li, W., & Janoos, F. (2014). Sparse Reinforcement Learning via Convex Optimization. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):424-432. Available from https://proceedings.mlr.press/v32/qin14.html.
