On the Help of Bounded Shot Verifiers, Comparators and Standardisers
for Learnability in Inductive Inference

Ziyuan Gao, Sanjay Jain, Frank Stephan, Thomas Zeugmann

Proceedings of Algorithmic Learning Theory, PMLR 83:413-437, 2018.

Abstract

The present paper deals with the inductive inference of recursively enumerable
languages from positive data (also called text).
It introduces the learning models of \emph{verifiability} and
\emph{comparability}. The input to a verifier is an index $e$ and a text of
the target language $L$, and the learner has to \emph{verify} whether or not
the input index $e$ is correct for the target language $L$. A comparator
receives two indices of languages from the target class $\cL$ as input and
has to decide in the limit whether or not these indices generate the same
language. Furthermore, \emph{standardisability} is studied, where a
\emph{standardiser} receives an index $j$ of some target language $L$ from
the class $\cL$, and for every $L \in \cL$ there must be an index $e$ such
that $e$ generates $L$ and the standardiser has to map every index $j$
for $L$ to $e$.
Additionally, the common learning models of \emph{explanatory learning},
\emph{conservative explanatory learning}, and \emph{behaviourally correct learning}
are considered. For almost all learning models mentioned above it is also
appropriate to consider the number of times a learner changes its mind.
In particular, if no mind change occurs then we obtain the \emph{finite} variant
of the models considered. Occasionally, learning with the help of an oracle
is also taken into consideration.
The main goal of this paper is to figure out to what extent
verifiability, comparability, and standardisability are helpful for the
inductive inference of classes of recursively enumerable languages.
Here we also distinguish between \emph{indexed families}, \emph{one-one enumerable
classes}, and \emph{recursively enumerable classes}.
Our results are manifold, and an almost complete picture is obtained. In particular,
for indexed families and recursively enumerable classes, finite comparability,
finite standardisability, and finite verifiability always imply finite
learnability. If at least one mind change is allowed, then there are differences,
i.e., for indexed families, comparability or verifiability implies conservative
explanatory learning, but standardisability does not; still, explanatory learning
can be achieved.
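As a toy illustration of explanatory learning in the limit from text (this example is not from the paper), consider the indexed family $L_n = \{0, 1, \dots, n\}$. A learner that conjectures the maximum element seen so far converges, on any text for $L_n$, to the correct index $n$ after finitely many mind changes:

```python
# Toy sketch (an assumed example, not the paper's construction) of
# explanatory learning from positive data for the indexed family
# L_n = {0, 1, ..., n}: conjecture the largest element seen so far.

def learner(text_prefix):
    """Return the current conjecture (an index n) after a finite text
    prefix; '#' is the pause symbol and carries no data."""
    data = [x for x in text_prefix if x != '#']
    return max(data) if data else 0  # default conjecture L_0 = {0}

def run(text):
    """Replay a finite text and count mind changes of the learner."""
    conjectures = [learner(text[:i + 1]) for i in range(len(text))]
    mind_changes = sum(1 for a, b in zip(conjectures, conjectures[1:])
                       if a != b)
    return conjectures[-1], mind_changes

# A text (with pauses) for L_3 = {0, 1, 2, 3}:
final, changes = run([1, '#', 0, 3, 2, 3, '#', 3])
```

Here the learner converges to the index 3 after a single mind change; bounding the number of allowed mind changes (zero mind changes giving the finite variant) is exactly the restriction the abstract discusses.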

Cite this Paper

BibTeX

@InProceedings{pmlr-v83-gao18a,
title = {On the Help of Bounded Shot Verifiers, Comparators and Standardisers
for Learnability in Inductive Inference},
author = {Gao, Ziyuan and Jain, Sanjay and Stephan, Frank and Zeugmann, Thomas},
booktitle = {Proceedings of Algorithmic Learning Theory},
pages = {413--437},
year = {2018},
editor = {Janoos, Firdaus and Mohri, Mehryar and Sridharan, Karthik},
volume = {83},
series = {Proceedings of Machine Learning Research},
month = {07--09 Apr},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v83/gao18a/gao18a.pdf},
url = {https://proceedings.mlr.press/v83/gao18a.html},
abstract = {The present paper deals with the inductive inference of recursively enumerable
languages from positive data (also called text).
It introduces the learning models of \emph{verifiability} and
\emph{comparability}. The input to a verifier is an index $e$ and a text of
the target language $L$, and the learner has to \emph{verify} whether or not
the input index $e$ is correct for the target language $L$. A comparator
receives two indices of languages from the target class $\cL$ as input and
has to decide in the limit whether or not these indices generate the same
language. Furthermore, \emph{standardisability} is studied, where a
\emph{standardiser} receives an index $j$ of some target language $L$ from
the class $\cL$, and for every $L \in \cL$ there must be an index $e$ such
that $e$ generates $L$ and the standardiser has to map every index $j$
for $L$ to $e$.
Additionally, the common learning models of \emph{explanatory learning},
\emph{conservative explanatory learning}, and \emph{behaviourally correct learning}
are considered. For almost all learning models mentioned above it is also
appropriate to consider the number of times a learner changes its mind.
In particular, if no mind change occurs then we obtain the \emph{finite} variant
of the models considered. Occasionally, learning with the help of an oracle
is also taken into consideration.
The main goal of this paper is to figure out to what extent
verifiability, comparability, and standardisability are helpful for the
inductive inference of classes of recursively enumerable languages.
Here we also distinguish between \emph{indexed families}, \emph{one-one enumerable
classes}, and \emph{recursively enumerable classes}.
Our results are manifold, and an almost complete picture is obtained. In particular,
for indexed families and recursively enumerable classes, finite comparability,
finite standardisability, and finite verifiability always imply finite
learnability. If at least one mind change is allowed, then there are differences,
i.e., for indexed families, comparability or verifiability implies conservative
explanatory learning, but standardisability does not; still, explanatory learning
can be achieved.}
}

Endnote

%0 Conference Paper
%T On the Help of Bounded Shot Verifiers, Comparators and Standardisers
for Learnability in Inductive Inference
%A Ziyuan Gao
%A Sanjay Jain
%A Frank Stephan
%A Thomas Zeugmann
%B Proceedings of Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Firdaus Janoos
%E Mehryar Mohri
%E Karthik Sridharan
%F pmlr-v83-gao18a
%I PMLR
%P 413--437
%U https://proceedings.mlr.press/v83/gao18a.html
%V 83
%X The present paper deals with the inductive inference of recursively enumerable
languages from positive data (also called text).
It introduces the learning models of \emph{verifiability} and
\emph{comparability}. The input to a verifier is an index $e$ and a text of
the target language $L$, and the learner has to \emph{verify} whether or not
the input index $e$ is correct for the target language $L$. A comparator
receives two indices of languages from the target class $\cL$ as input and
has to decide in the limit whether or not these indices generate the same
language. Furthermore, \emph{standardisability} is studied, where a
\emph{standardiser} receives an index $j$ of some target language $L$ from
the class $\cL$, and for every $L \in \cL$ there must be an index $e$ such
that $e$ generates $L$ and the standardiser has to map every index $j$
for $L$ to $e$.
Additionally, the common learning models of \emph{explanatory learning},
\emph{conservative explanatory learning}, and \emph{behaviourally correct learning}
are considered. For almost all learning models mentioned above it is also
appropriate to consider the number of times a learner changes its mind.
In particular, if no mind change occurs then we obtain the \emph{finite} variant
of the models considered. Occasionally, learning with the help of an oracle
is also taken into consideration.
The main goal of this paper is to figure out to what extent
verifiability, comparability, and standardisability are helpful for the
inductive inference of classes of recursively enumerable languages.
Here we also distinguish between \emph{indexed families}, \emph{one-one enumerable
classes}, and \emph{recursively enumerable classes}.
Our results are manifold, and an almost complete picture is obtained. In particular,
for indexed families and recursively enumerable classes, finite comparability,
finite standardisability, and finite verifiability always imply finite
learnability. If at least one mind change is allowed, then there are differences,
i.e., for indexed families, comparability or verifiability implies conservative
explanatory learning, but standardisability does not; still, explanatory learning
can be achieved.

APA

Gao, Z., Jain, S., Stephan, F. & Zeugmann, T. (2018). On the Help of Bounded Shot Verifiers, Comparators and Standardisers for Learnability in Inductive Inference. Proceedings of Algorithmic Learning Theory, in Proceedings of Machine Learning Research 83:413-437. Available from https://proceedings.mlr.press/v83/gao18a.html.