Continuous Learning of Action and State Spaces (CLASS)

Paul Robertson, Olivier Georgeon
Proceedings of the First International Workshop on Self-Supervised Learning, PMLR 131:15-31, 2020.

Abstract

We present a novel approach to state space discretization for constructivist and reinforcement learning. Constructivist learning and reinforcement learning often operate on a predefined set of states and transitions (a state space). AI researchers design algorithms to reach particular goal states in this state space (for example, visualized as goal cells that a robot should reach in a grid). As the size and dimensionality of the state space increase, however, finding goal states becomes intractable. It is nonetheless assumed that these algorithms can have useful applications in the physical world, provided there is a way to construct a discrete state space of reasonable size and dimensionality. Yet the manner in which the state space is discretized is the source of many problems for both constructivist and reinforcement learning approaches. These problems fall roughly into two categories: (1) wiring too much domain information into the solution, which sacrifices generality, and (2) requiring massive storage to represent the state space (such as Q-tables), which does not scale to useful domains with high-dimensional state spaces. A further limitation is that high-dimensional state spaces require a massive number of learning trials. We present a new approach that builds upon ideas from place cells and cognitive maps.
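
To make the scalability concern concrete, the following is a minimal back-of-the-envelope sketch (not from the paper) of how tabular Q-learning storage grows when a continuous space is discretized into a fixed grid. The bin count, dimension counts, and action count are illustrative assumptions.

# Illustrative sketch (not from the paper): storage needed by a tabular
# Q-table when each continuous state dimension is discretized into
# `bins` cells. The number of states grows as bins**dims, so the table
# explodes with dimensionality -- the non-scalability problem the
# abstract describes.

def q_table_entries(bins: int, dims: int, actions: int) -> int:
    """Number of Q-values for a uniform grid discretization."""
    num_states = bins ** dims      # every combination of bin indices
    return num_states * actions   # one Q-value per (state, action) pair

if __name__ == "__main__":
    ACTIONS = 4  # e.g., a grid-world robot: up, down, left, right
    for dims in (2, 4, 8, 16):
        entries = q_table_entries(bins=10, dims=dims, actions=ACTIONS)
        # At 4 bytes per float32 entry:
        print(f"{dims:2d} dims -> {entries:,} entries "
              f"(~{entries * 4 / 1e9:.3g} GB)")

With only ten bins per dimension, sixteen dimensions already demand on the order of 10^16 table entries, which is the kind of blow-up that motivates moving away from fixed grid discretizations toward the learned, place-cell-inspired representation the paper proposes.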

Cite this Paper


BibTeX
@InProceedings{pmlr-v131-robertson20b,
  title     = {Continuous Learning of Action and State Spaces (CLASS)},
  author    = {Robertson, Paul and Georgeon, Olivier},
  booktitle = {Proceedings of the First International Workshop on Self-Supervised Learning},
  pages     = {15--31},
  year      = {2020},
  editor    = {Minsky, Henry and Robertson, Paul and Georgeon, Olivier L. and Minsky, Milan and Shaoul, Cyrus},
  volume    = {131},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--28 Feb},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v131/robertson20b/robertson20b.pdf},
  url       = {https://proceedings.mlr.press/v131/robertson20b.html}
}
Endnote
%0 Conference Paper
%T Continuous Learning of Action and State Spaces (CLASS)
%A Paul Robertson
%A Olivier Georgeon
%B Proceedings of the First International Workshop on Self-Supervised Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Henry Minsky
%E Paul Robertson
%E Olivier L. Georgeon
%E Milan Minsky
%E Cyrus Shaoul
%F pmlr-v131-robertson20b
%I PMLR
%P 15--31
%U https://proceedings.mlr.press/v131/robertson20b.html
%V 131
APA
Robertson, P. & Georgeon, O. (2020). Continuous Learning of Action and State Spaces (CLASS). Proceedings of the First International Workshop on Self-Supervised Learning, in Proceedings of Machine Learning Research 131:15-31. Available from https://proceedings.mlr.press/v131/robertson20b.html.
