Learning RoboCup-Keepaway with Kernels

Tobias Jung, Daniel Polani
Gaussian Processes in Practice, PMLR 1:33-57, 2007.

Abstract

We apply kernel-based methods to solve the difficult reinforcement learning problem of 3vs2 keepaway in RoboCup simulated soccer. Key challenges in keepaway are the high dimensionality of the state space (rendering conventional discretization-based function approximation such as tile coding infeasible), the stochasticity due to noise and to multiple learning agents needing to cooperate (meaning that the exact dynamics of the environment are unknown), and real-time learning (meaning that an efficient online implementation is required). We employ the general framework of approximate policy iteration with least-squares-based policy evaluation. As the underlying function approximator we consider the family of regularization networks with a subset-of-regressors approximation. The core of our proposed solution is an efficient recursive implementation with automatic, supervised selection of relevant basis functions. Simulation results indicate that the behavior learned through our approach clearly outperforms the best results obtained with tile coding by Stone et al. (2005).
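To make the function-approximation idea concrete: a regularization network with a subset-of-regressors approximation represents the learned function as f(x) = sum_i alpha_i k(x, x_i), where the sum runs over m selected basis centers rather than over all n training samples. Below is a minimal sketch of a batch version of such a fit in Python/NumPy; the Gaussian kernel, its width sigma, and the regularization weight lam are illustrative assumptions, and this is not the authors' recursive online implementation.

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def fit_subset_of_regressors(X, y, subset_idx, lam=1e-2, sigma=1.0):
    """Fit a regularization network restricted to m basis centers.

    Solves (K_mn K_nm + lam * K_mm) alpha = K_mn y, the standard
    subset-of-regressors normal equations, so only m weights are
    learned instead of one per training sample.
    NOTE: lam and sigma are illustrative choices, not the paper's settings.
    """
    Xm = X[subset_idx]                    # m selected centers
    Kmn = gaussian_kernel(Xm, X, sigma)   # m x n cross-kernel
    Kmm = gaussian_kernel(Xm, Xm, sigma)  # m x m kernel on the centers
    A = Kmn @ Kmn.T + lam * Kmm
    alpha = np.linalg.solve(A, Kmn @ y)
    return Xm, alpha

def predict(Xm, alpha, Xstar, sigma=1.0):
    """Evaluate the fitted network at new inputs Xstar."""
    return gaussian_kernel(Xstar, Xm, sigma) @ alpha

# Toy usage: 500 random 4-dimensional states, noisy targets, 25 centers.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 4))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(500)
Xm, alpha = fit_subset_of_regressors(
    X, y, subset_idx=rng.choice(500, size=25, replace=False))
print(predict(Xm, alpha, X[:5]))

With m much smaller than n, forming the normal equations costs O(m^2 n) and the solve O(m^3), rather than the O(n^3) of the full kernel solution; keeping m small is also what makes a recursive online variant, as used in the paper, computationally feasible.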

Cite this Paper


BibTeX
@InProceedings{pmlr-v1-jung07a,
  title     = {Learning RoboCup-Keepaway with Kernels},
  author    = {Jung, Tobias and Polani, Daniel},
  booktitle = {Gaussian Processes in Practice},
  pages     = {33--57},
  year      = {2007},
  editor    = {Lawrence, Neil D. and Schwaighofer, Anton and Quiñonero Candela, Joaquin},
  volume    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Bletchley Park, UK},
  month     = {12--13 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v1/jung07a/jung07a.pdf},
  url       = {https://proceedings.mlr.press/v1/jung07a.html}
}
APA
Jung, T., & Polani, D. (2007). Learning RoboCup-Keepaway with kernels. Gaussian Processes in Practice, in Proceedings of Machine Learning Research 1:33-57. Available from https://proceedings.mlr.press/v1/jung07a.html.