Learning RoboCup-Keepaway with Kernels

Tobias Jung, Daniel Polani
Gaussian Processes in Practice, PMLR 1:33-57, 2007.

Abstract

We apply kernel-based methods to solve the difficult reinforcement learning problem of 3vs2 keepaway in RoboCup simulated soccer. Key challenges in keepaway are the high dimensionality of the state space (rendering conventional discretization-based function approximation such as tile coding infeasible), the stochasticity due to noise and multiple learning agents needing to cooperate (meaning that the exact dynamics of the environment are unknown), and real-time learning (meaning that an efficient online implementation is required). We employ the general framework of approximate policy iteration with least-squares-based policy evaluation. As the underlying function approximator we consider the family of regularization networks with the subset-of-regressors approximation. The core of our proposed solution is an efficient recursive implementation with automatic supervised selection of relevant basis functions. Simulation results indicate that the behavior learned through our approach clearly outperforms the best results obtained with tile coding by Stone et al. (2005).
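The core machinery the abstract describes, least-squares policy evaluation over a kernel basis, can be illustrated with a minimal LSTD-style sketch. This is not the authors' implementation: all function names and parameters below are hypothetical, the Gaussian basis centers are fixed rather than selected automatically, and a simple ridge penalty stands in for the regularization-network term.

```python
import numpy as np

def gaussian_features(s, centers, width=0.5):
    """RBF feature vector phi(s): one Gaussian kernel per basis center."""
    return np.exp(-np.sum((centers - s) ** 2, axis=1) / (2 * width ** 2))

def lstd(transitions, centers, gamma=0.9, reg=1e-3):
    """Least-squares policy evaluation (LSTD): solve A w = b, where
    A = reg*I + sum_i phi(s_i) (phi(s_i) - gamma * phi(s_i'))^T
    b = sum_i phi(s_i) * r_i
    The ridge term reg*I keeps A well conditioned, playing the role of
    the regularization penalty in a regularization network."""
    k = len(centers)
    A = reg * np.eye(k)
    b = np.zeros(k)
    for s, r, s_next in transitions:
        phi = gaussian_features(s, centers)
        phi_next = gaussian_features(s_next, centers)
        A += np.outer(phi, phi - gamma * phi_next)
        b += phi * r
    return np.linalg.solve(A, b)  # weights of the approximate value function

# Toy usage: a 3-state chain 0 -> 1 -> 2, reward 1.0 on reaching state 2,
# which then self-loops with reward 0 (so V(2) should be near zero).
centers = np.array([[0.0], [1.0], [2.0]])
transitions = [
    (np.array([0.0]), 0.0, np.array([1.0])),
    (np.array([1.0]), 1.0, np.array([2.0])),
    (np.array([2.0]), 0.0, np.array([2.0])),
]
w = lstd(transitions, centers)
V = lambda s: gaussian_features(s, centers) @ w  # approximate state value
```

In an actual policy-iteration loop, the transitions would be collected under the current policy, `lstd` would evaluate it, and the policy would then be improved greedily with respect to `V`.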

Cite this Paper


BibTeX
@InProceedings{pmlr-v1-jung07a,
  title     = {Learning RoboCup-Keepaway with Kernels},
  author    = {Tobias Jung and Daniel Polani},
  booktitle = {Gaussian Processes in Practice},
  pages     = {33--57},
  year      = {2007},
  editor    = {Neil D. Lawrence and Anton Schwaighofer and Joaquin Quiñonero Candela},
  volume    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Bletchley Park, UK},
  month     = {12--13 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v1/jung07a/jung07a.pdf},
  url       = {http://proceedings.mlr.press/v1/jung07a.html},
  abstract  = {We apply kernel-based methods to solve the difficult reinforcement learning problem of 3vs2 keepaway in RoboCup simulated soccer. Key challenges in keepaway are the high dimensionality of the state space (rendering conventional discretization-based function approximation such as tile coding infeasible), the stochasticity due to noise and multiple learning agents needing to cooperate (meaning that the exact dynamics of the environment are unknown), and real-time learning (meaning that an efficient online implementation is required). We employ the general framework of approximate policy iteration with least-squares-based policy evaluation. As the underlying function approximator we consider the family of regularization networks with the subset-of-regressors approximation. The core of our proposed solution is an efficient recursive implementation with automatic supervised selection of relevant basis functions. Simulation results indicate that the behavior learned through our approach clearly outperforms the best results obtained with tile coding by Stone et al. (2005).}
}
Endnote
%0 Conference Paper
%T Learning RoboCup-Keepaway with Kernels
%A Tobias Jung
%A Daniel Polani
%B Gaussian Processes in Practice
%C Proceedings of Machine Learning Research
%D 2007
%E Neil D. Lawrence
%E Anton Schwaighofer
%E Joaquin Quiñonero Candela
%F pmlr-v1-jung07a
%I PMLR
%J Proceedings of Machine Learning Research
%P 33--57
%U http://proceedings.mlr.press
%V 1
%W PMLR
%X We apply kernel-based methods to solve the difficult reinforcement learning problem of 3vs2 keepaway in RoboCup simulated soccer. Key challenges in keepaway are the high dimensionality of the state space (rendering conventional discretization-based function approximation such as tile coding infeasible), the stochasticity due to noise and multiple learning agents needing to cooperate (meaning that the exact dynamics of the environment are unknown), and real-time learning (meaning that an efficient online implementation is required). We employ the general framework of approximate policy iteration with least-squares-based policy evaluation. As the underlying function approximator we consider the family of regularization networks with the subset-of-regressors approximation. The core of our proposed solution is an efficient recursive implementation with automatic supervised selection of relevant basis functions. Simulation results indicate that the behavior learned through our approach clearly outperforms the best results obtained with tile coding by Stone et al. (2005).
RIS
TY  - CPAPER
TI  - Learning RoboCup-Keepaway with Kernels
AU  - Tobias Jung
AU  - Daniel Polani
BT  - Gaussian Processes in Practice
PY  - 2007/03/11
DA  - 2007/03/11
ED  - Neil D. Lawrence
ED  - Anton Schwaighofer
ED  - Joaquin Quiñonero Candela
ID  - pmlr-v1-jung07a
PB  - PMLR
SP  - 33
DP  - PMLR
EP  - 57
L1  - http://proceedings.mlr.press/v1/jung07a/jung07a.pdf
UR  - http://proceedings.mlr.press/v1/jung07a.html
AB  - We apply kernel-based methods to solve the difficult reinforcement learning problem of 3vs2 keepaway in RoboCup simulated soccer. Key challenges in keepaway are the high dimensionality of the state space (rendering conventional discretization-based function approximation such as tile coding infeasible), the stochasticity due to noise and multiple learning agents needing to cooperate (meaning that the exact dynamics of the environment are unknown), and real-time learning (meaning that an efficient online implementation is required). We employ the general framework of approximate policy iteration with least-squares-based policy evaluation. As the underlying function approximator we consider the family of regularization networks with the subset-of-regressors approximation. The core of our proposed solution is an efficient recursive implementation with automatic supervised selection of relevant basis functions. Simulation results indicate that the behavior learned through our approach clearly outperforms the best results obtained with tile coding by Stone et al. (2005).
ER  -
APA
Jung, T., & Polani, D. (2007). Learning RoboCup-Keepaway with Kernels. Gaussian Processes in Practice, in PMLR 1:33-57.

Related Material