On the Help of Bounded Shot Verifiers, Comparators and Standardisers for Learnability in Inductive Inference
Proceedings of Algorithmic Learning Theory, PMLR 83:413-437, 2018.
Abstract
The present paper deals with the inductive inference of recursively enumerable
languages from positive data (also called text).
It introduces the learning models of \emph{verifiability} and
\emph{comparability}. The input to a verifier is an index $e$ and a text of
the target language $L$, and the verifier has to \emph{verify} whether or not
the input index $e$ is correct for the target language $L$. A comparator
receives two indices of languages from the target class $\cL$ as input and
has to decide in the limit whether or not these indices generate the same
language. Furthermore, \emph{standardisability} is studied, where a
\emph{standardiser} receives an index $j$ of some target language $L$ from
the class $\cL$; for every $L \in \cL$ there must be an index $e$ generating
$L$ such that the standardiser maps every index $j$ for $L$ to $e$.
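As a rough sketch (not the paper's exact bounded-shot definitions), writing $W_e$ for the language generated by index $e$, $T$ for a text of $L$, and $T[n]$ for its first $n$ elements (notation introduced here, not taken from the abstract), the three notions amount to limiting decisions of the following kind:
\[
\begin{aligned}
\text{verifier } V:\quad & \lim_{n\to\infty} V(e, T[n]) = 1 \iff W_e = L,\\
\text{comparator } C:\quad & \lim_{n\to\infty} C(i, j, n) = 1 \iff W_i = W_j \quad (\text{for } W_i, W_j \in \cL),\\
\text{standardiser } S:\quad & \lim_{n\to\infty} S(j, n) = e_L \text{ whenever } W_j = L \in \cL, \text{ where } W_{e_L} = L
\end{aligned}
\]
and $e_L$ depends only on $L$, not on the particular index $j$. In the finite variants, the limiting value has to be reached without revising an earlier committed output.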
Additionally, the common learning models of \emph{explanatory learning},
\emph{conservative explanatory learning}, and \emph{behaviourally correct learning}
are considered. For almost all learning models mentioned above it is also
appropriate to consider the number of times a learner changes its mind.
In particular, if no mind change occurs, then we obtain the \emph{finite} variant
of the models considered. Occasionally, learning with the help of an oracle is
also taken into consideration.
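For orientation, the usual convergence criteria behind these models can be summarised as follows (a standard rendering with $M$ a learner, $T$ a text for $L$, $T[n]$ its first $n$ elements, and $\mathrm{content}(T[n])$ the data seen so far; the paper's formal definitions give the precise mind-change-bounded versions):
\[
\begin{aligned}
\textbf{explanatory (Ex):}\quad & M(T[n]) = e \text{ for almost all } n, \text{ where } W_e = L,\\
\textbf{behaviourally correct (BC):}\quad & W_{M(T[n])} = L \text{ for almost all } n,\\
\textbf{conservative Ex:}\quad & \text{Ex, and } M(T[n+1]) \neq M(T[n]) \text{ only if } \mathrm{content}(T[n+1]) \not\subseteq W_{M(T[n])},\\
\textbf{finite:}\quad & \text{Ex with zero mind changes, i.e., the first hypothesis is never revised.}
\end{aligned}
\]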
The main goal of this paper is to figure out to what extent
verifiability, comparability, and standardisability are helpful for the
inductive inference of classes of recursively enumerable languages.
Here we also distinguish between \emph{indexed families}, \emph{one-one enumerable
classes}, and \emph{recursively enumerable classes}.
Our results are manifold, and an almost complete picture is obtained. In particular,
for indexed families and recursively enumerable classes, finite comparability,
finite standardisability, and finite verifiability always imply finite
learnability. If at least one mind change is allowed, then there are differences:
for indexed families, comparability or verifiability implies conservative
explanatory learning, but standardisability does not; still, explanatory
learning can be achieved.