The Future of AI Research: Ten Defeasible ‘Axioms of Intelligence’
Proceedings of the Third International Workshop on Self-Supervised Learning, PMLR 192:5-21, 2022.
Abstract
What sets artificial intelligence (AI) apart from other fields of science and technology is not what it has achieved so far, but rather what it set out to do from the very beginning: to create autonomous, self-contained systems that can rival human cognition, that is, machines with 'human-level general intelligence.' Achieving this aim calls for a new kind of system that, among other things, unifies in a single architecture the ability to represent causal relations, to create and manage knowledge incrementally and autonomously, and to generate its own meaning through empirical reasoning and control. We maintain that building such systems requires a shared methodological foundation and a stronger theoretical basis than the one inherited directly from computer science. This, in turn, demands a greater emphasis on the theory of intelligence and on methodological approaches for building such systems. We argue that the necessary (though not necessarily sufficient) components of general intelligence must include the unification of causal relations, reasoning, and cognitive development. A constructivist stance, in our view, offers a good starting point for this purpose.