Normal Forms in Semantic Language Identification

Timo Kötzing, Martin Schirneck, Karen Seidel
Proceedings of the 28th International Conference on Algorithmic Learning Theory, PMLR 76:493-516, 2017.

Abstract

We consider language learning in the limit from text where all learning restrictions are semantic, that is, where any conjecture may be replaced by a semantically equivalent conjecture. For different such learning criteria, starting with the well-known $\mathbf{Txt}\mathbf{G}\mathbf{Bc}$-learning, we consider three normal forms: strongly locking learning, consistent learning, and (partially) set-driven learning. These normal forms support and simplify proofs and give insight into which behaviors are necessary for successful learning (for example, consistency in conservative learning implies cautiousness and strong decisiveness).

We show that strongly locking learning can be assumed for partially set-driven learners, even when learning restrictions apply. We give a very general proof relying only on a natural property of the learning restriction, namely that it allows for simulation on equivalent text. Furthermore, when no restrictions apply, the converse also holds: every strongly locking learner can be made partially set-driven. For several semantic learning criteria we show that learning can be done consistently. Finally, we deduce for which learning restrictions partial set-drivenness and set-drivenness coincide, including a general statement about classes of infinite languages. The latter again relies on a simulation argument.
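To make two of these notions concrete: a learner is set-driven if its conjecture depends only on the set of data observed so far (not on order, repetition, or the length of the presentation), and it is consistent if every conjecture contains all data seen. The following minimal sketch, written in Python and not taken from the paper, illustrates both properties with a toy learner for the class of all finite languages; all names in it are our own.

# Illustrative sketch, not from the paper: a toy Gold-style learner for
# the class of all finite languages over the natural numbers. It is
# set-driven (the conjecture depends only on the set of data seen) and
# consistent (every conjecture contains all data seen so far).

def set_driven_learner(content: frozenset) -> frozenset:
    # Identify a hypothesis with the language it denotes, i.e., conjecture
    # the finite language consisting of exactly the data observed.
    return content  # consistent by construction: the data is contained in the conjecture

def run_on_text(text):
    # Feed a text (a sequence of data, with None marking a pause symbol)
    # to the learner and collect the sequence of conjectures.
    seen = set()
    conjectures = []
    for datum in text:
        if datum is not None:
            seen.add(datum)
        conjectures.append(set_driven_learner(frozenset(seen)))
    return conjectures

# Two texts for the same finite language {1, 2, 3}, differing in order,
# repetition, and pauses; the set-driven learner converges to the same
# (correct) hypothesis on both.
t1 = [1, 2, 3, 3, 3]
t2 = [3, None, 2, 2, 1]
assert run_on_text(t1)[-1] == run_on_text(t2)[-1] == frozenset({1, 2, 3})

A partially set-driven (also called rearrangement-independent) learner would, in addition to the set of data, also receive the length of the presentation seen so far; the paper's results concern in particular when this extra information can be dispensed with.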

Cite this Paper

BibTeX
@InProceedings{pmlr-v76-kötzing17a,
  title     = {Normal Forms in Semantic Language Identification},
  author    = {Kötzing, Timo and Schirneck, Martin and Seidel, Karen},
  booktitle = {Proceedings of the 28th International Conference on Algorithmic Learning Theory},
  pages     = {493--516},
  year      = {2017},
  editor    = {Hanneke, Steve and Reyzin, Lev},
  volume    = {76},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Oct},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v76/kötzing17a/kötzing17a.pdf},
  url       = {https://proceedings.mlr.press/v76/k%C3%B6tzing17a.html}
}
APA
Kötzing, T., Schirneck, M. & Seidel, K. (2017). Normal Forms in Semantic Language Identification. Proceedings of the 28th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 76:493-516. Available from https://proceedings.mlr.press/v76/k%C3%B6tzing17a.html.
