Seed-Programmed Autonomous General Learning

Kristinn R. Thórisson
Proceedings of the First International Workshop on Self-Supervised Learning, PMLR 131:32-61, 2020.

Abstract

The knowledge that a natural learner creates of any new situation will initially not only be partial but very likely be partially incorrect. To improve incomplete and incorrect knowledge with increased experience – accumulated evidence – learning processes must bring already-acquired knowledge towards making sense of new situations. For the initial creation of knowledge, and its subsequent usage, expansion, modification, unification, and deletion, knowledge construction mechanisms must be self-guided, capable of self-supervised “surgical” operation on existing knowledge, involving among other things self-inspection or reflection. Further, the information that makes up an agent’s knowledge set must thus be structured in a way that supports reflective processes including discrimination, comparison, and manipulation of arbitrary subsets of the knowledge set. Few proposals for how to achieve this in a parsimonious way exist. Here we present a theory of how systems with these properties may work, and how cumulative self-supervised learning mechanisms can reach levels of autonomy like those seen in individuals of many animal species. Our theory rests on the hypotheses that learning is (a) organized around causal relations, (b) bootstrapped from observed correlations, using (c) fine-grain relational models, manipulated by (d) micro-ampliative reasoning processes. We further hypothesize that a machine properly constructed in this way will be (e) capable of seed-programmed autonomous generality: The ability to apply learning to any phenomenon – that is, being domain-independent – provided that (f) the seed reference observable variables at “birth”, and that (g) new phenomena and existing knowledge overlap on one or more observables or inferred features. The theory is based on implemented systems that have produced notable results in the direction of increased general machine intelligence.

Cite this Paper


BibTeX
@InProceedings{pmlr-v131-thorisson20a,
  title     = {Seed-Programmed Autonomous General Learning},
  author    = {Th\'orisson, Kristinn R.},
  booktitle = {Proceedings of the First International Workshop on Self-Supervised Learning},
  pages     = {32--61},
  year      = {2020},
  editor    = {Minsky, Henry and Robertson, Paul and Georgeon, Olivier L. and Minsky, Milan and Shaoul, Cyrus},
  volume    = {131},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--28 Feb},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v131/thorisson20a/thorisson20a.pdf},
  url       = {https://proceedings.mlr.press/v131/thorisson20a.html},
  abstract  = {The knowledge that a natural learner creates of any new situation will initially not only be partial but very likely be partially incorrect. To improve incomplete and incorrect knowledge with increased experience – accumulated evidence – learning processes must bring already-acquired knowledge towards making sense of new situations. For the initial creation of knowledge, and its subsequent usage, expansion, modification, unification, and deletion, knowledge construction mechanisms must be self-guided, capable of self-supervised “surgical” operation on existing knowledge, involving among other things self-inspection or reflection. Further, the information that makes up an agent’s knowledge set must thus be structured in a way that supports reflective processes including discrimination, comparison, and manipulation of arbitrary subsets of the knowledge set. Few proposals for how to achieve this in a parsimonious way exist. Here we present a theory of how systems with these properties may work, and how cumulative self-supervised learning mechanisms can reach levels of autonomy like those seen in individuals of many animal species. Our theory rests on the hypotheses that learning is (a) organized around causal relations, (b) bootstrapped from observed correlations, using (c) fine-grain relational models, manipulated by (d) micro-ampliative reasoning processes. We further hypothesize that a machine properly constructed in this way will be (e) capable of seed-programmed autonomous generality: The ability to apply learning to any phenomenon – that is, being domain-independent – provided that (f) the seed reference observable variables at “birth”, and that (g) new phenomena and existing knowledge overlap on one or more observables or inferred features. The theory is based on implemented systems that have produced notable results in the direction of increased general machine intelligence.}
}
Endnote
%0 Conference Paper
%T Seed-Programmed Autonomous General Learning
%A Kristinn R. Thórisson
%B Proceedings of the First International Workshop on Self-Supervised Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Henry Minsky
%E Paul Robertson
%E Olivier L. Georgeon
%E Milan Minsky
%E Cyrus Shaoul
%F pmlr-v131-thorisson20a
%I PMLR
%P 32--61
%U https://proceedings.mlr.press/v131/thorisson20a.html
%V 131
%X The knowledge that a natural learner creates of any new situation will initially not only be partial but very likely be partially incorrect. To improve incomplete and incorrect knowledge with increased experience – accumulated evidence – learning processes must bring already-acquired knowledge towards making sense of new situations. For the initial creation of knowledge, and its subsequent usage, expansion, modification, unification, and deletion, knowledge construction mechanisms must be self-guided, capable of self-supervised “surgical” operation on existing knowledge, involving among other things self-inspection or reflection. Further, the information that makes up an agent’s knowledge set must thus be structured in a way that supports reflective processes including discrimination, comparison, and manipulation of arbitrary subsets of the knowledge set. Few proposals for how to achieve this in a parsimonious way exist. Here we present a theory of how systems with these properties may work, and how cumulative self-supervised learning mechanisms can reach levels of autonomy like those seen in individuals of many animal species. Our theory rests on the hypotheses that learning is (a) organized around causal relations, (b) bootstrapped from observed correlations, using (c) fine-grain relational models, manipulated by (d) micro-ampliative reasoning processes. We further hypothesize that a machine properly constructed in this way will be (e) capable of seed-programmed autonomous generality: The ability to apply learning to any phenomenon – that is, being domain-independent – provided that (f) the seed reference observable variables at “birth”, and that (g) new phenomena and existing knowledge overlap on one or more observables or inferred features. The theory is based on implemented systems that have produced notable results in the direction of increased general machine intelligence.
APA
Thórisson, K.R. (2020). Seed-Programmed Autonomous General Learning. Proceedings of the First International Workshop on Self-Supervised Learning, in Proceedings of Machine Learning Research 131:32-61. Available from https://proceedings.mlr.press/v131/thorisson20a.html.