Explicit General Analogy for Autonomous Transversal Learning
Proceedings of the Third International Workshop on Self-Supervised Learning, PMLR 192:48-62, 2022.
Abstract
Making analogies is a kind of reasoning in which two or more things are compared to highlight or uncover attributes of interest. Besides being useful for comparing what is known, analogy making can help a learning agent deal with tasks and environments not experienced before: similarities and differences to known phenomena and their cause-effect relations can be a source for generating hypotheses about novel phenomena, which in turn can serve as a basis for exploration and experimentation. Artificial intelligence (AI) systems that can make use of explicit analogies are relatively rare, and those making general analogies are rarer still. This may be because most AI systems are targeted at well-known tasks, relying heavily on human programmers for knowledge creation, an approach that, besides being intractably slow, error-prone, and highly ineffective, precludes the use of analogies for enabling autonomous knowledge transfer between tasks, domains, and environments with common characteristics. The automation of explicit analogy making in the service of such knowledge transfer has, in our view, at least three prerequisites: (a) compositional knowledge representation, (b) reasoning machinery, and (c) the ability of the agent to make experiments on its surroundings. For an agent's intelligence to be general, the methods chosen for these must be domain-independent and available on demand at the agent's discretion. The agent would identify a target novelty, generate hypotheses about what the novelty is 'like' through analogies, generate a set of experiments with the potential to disqualify these hypotheses and select between competing ones, and intervene on the environment through direct action to test them.
Here we describe the design of an analogy mechanism that allows a learning agent with the above features to autonomously, using previously learned causal knowledge, make analogies between a source and target task, hypothesize sets of new causal models for performing the new tasks, and verify the validity of these through a set of autonomously generated actions. We describe how this general approach can be implemented in an existing cognitive system, the Autocatalytic Endogenous Reflective Architecture (AERA).