A Unified Model of Reasoning and Learning

Pei Wang
Proceedings of the Second International Workshop on Self-Supervised Learning, PMLR 159:28-48, 2022.

Abstract

We present a novel approach to state space discretization for constructivist and reinforcement learning. Constructivist learning and reinforcement learning typically operate on a predefined set of states and transitions (a state space). AI researchers design algorithms to reach particular goal states in this state space (for example, visualized as goal cells that a robot should reach in a grid). When the size and dimensionality of the state space increase, however, finding goal states becomes intractable. It is nonetheless assumed that these algorithms can have useful applications in the physical world, provided that there is a way to construct a discrete state space of reasonable size and dimensionality. Yet the manner in which the state space is discretized is the source of many problems for both constructivist and reinforcement learning approaches. These problems fall roughly into two categories: (1) non-generality, arising from wiring too much domain information into the solution, and (2) non-scalability to useful domains with high-dimensional state spaces, which require massive storage to represent the state space (such as Q-tables). A further limitation is that high-dimensional state spaces require a massive number of learning trials. We present a new approach that builds upon ideas from place cells and cognitive maps.
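To make the scalability point concrete, the following is a minimal sketch (not the paper's method; the grid size, reward scheme, and hyperparameters are illustrative assumptions) of tabular Q-learning on a small grid with a single goal cell, followed by a back-of-the-envelope count of how a tabular representation grows when a continuous space is discretized into more dimensions.

```python
import numpy as np

# Illustrative only: tabular Q-learning on a tiny grid world with one goal
# cell, plus a rough look at how the Q-table grows when a continuous state
# space is discretized. Not the paper's approach; all names and parameters
# here are made up for this sketch.

GRID = 5                                        # 5x5 grid of discrete cells
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]    # right, left, down, up
GOAL = (4, 4)                                   # the goal cell the agent must reach

def step(state, action):
    """Move deterministically on the grid; reward 1 only at the goal."""
    r = min(max(state[0] + action[0], 0), GRID - 1)
    c = min(max(state[1] + action[1], 0), GRID - 1)
    nxt = (r, c)
    return nxt, float(nxt == GOAL), nxt == GOAL

def q_learning(episodes=500, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((GRID, GRID, len(ACTIONS)))    # one entry per (cell, action)
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            # epsilon-greedy action selection over the current Q estimates
            a = (rng.integers(len(ACTIONS)) if rng.random() < eps
                 else int(np.argmax(q[state])))
            nxt, reward, done = step(state, ACTIONS[a])
            q[state][a] += alpha * (reward + gamma * np.max(q[nxt]) - q[state][a])
            state = nxt
    return q

if __name__ == "__main__":
    q = q_learning()
    print("Greedy action at start cell:", int(np.argmax(q[(0, 0)])))
    # The non-scalability problem: discretizing d continuous dimensions into
    # 10 bins each yields 10**d states, so the table explodes with d.
    for d in (2, 4, 8, 12):
        print(f"{d:2d} dims x 10 bins -> {10**d:,} states in the Q-table")
```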

Cite this Paper


BibTeX
@InProceedings{pmlr-v159-wang22a,
  title     = {A Unified Model of Reasoning and Learning},
  author    = {Wang, Pei},
  booktitle = {Proceedings of the Second International Workshop on Self-Supervised Learning},
  pages     = {28--48},
  year      = {2022},
  editor    = {Thórisson, Kristinn R. and Robertson, Paul},
  volume    = {159},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--14 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v159/wang22a/wang22a.pdf},
  url       = {https://proceedings.mlr.press/v159/wang22a.html}
}
Endnote
%0 Conference Paper
%T A Unified Model of Reasoning and Learning
%A Pei Wang
%B Proceedings of the Second International Workshop on Self-Supervised Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kristinn R. Thórisson
%E Paul Robertson
%F pmlr-v159-wang22a
%I PMLR
%P 28--48
%U https://proceedings.mlr.press/v159/wang22a.html
%V 159
APA
Wang, P. (2022). A Unified Model of Reasoning and Learning. Proceedings of the Second International Workshop on Self-Supervised Learning, in Proceedings of Machine Learning Research 159:28-48. Available from https://proceedings.mlr.press/v159/wang22a.html.
