Clearing Restarting Automata and Grammatical Inference

Peter Černo
Proceedings of the Eleventh International Conference on Grammatical Inference, PMLR 21:54-68, 2012.

Abstract

Clearing and subword-clearing restarting automata are linguistically motivated models of automata. We investigate the problem of grammatical inference for such automata based on the given set of positive and negative samples. We show that it is possible to identify these models in the limit. In this way we can learn a large class of languages. On the other hand, we prove that the task of finding a clearing restarting automaton consistent with a given set of positive and negative samples is NP-hard, provided that we impose an upper bound on the width of its instructions.

Cite this Paper


BibTeX
@InProceedings{pmlr-v21-cerno12a,
  title     = {Clearing Restarting Automata and Grammatical Inference},
  author    = {Černo, Peter},
  booktitle = {Proceedings of the Eleventh International Conference on Grammatical Inference},
  pages     = {54--68},
  year      = {2012},
  editor    = {Heinz, Jeffrey and Higuera, Colin and Oates, Tim},
  volume    = {21},
  series    = {Proceedings of Machine Learning Research},
  address   = {University of Maryland, College Park, MD, USA},
  month     = {05--08 Sep},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v21/cerno12a/cerno12a.pdf},
  url       = {https://proceedings.mlr.press/v21/cerno12a.html},
  abstract  = {Clearing and subword-clearing restarting automata are linguistically motivated models of automata. We investigate the problem of grammatical inference for such automata based on the given set of positive and negative samples. We show that it is possible to identify these models in the limit. In this way we can learn a large class of languages. On the other hand, we prove that the task of finding a clearing restarting automaton consistent with a given set of positive and negative samples is NP-hard, provided that we impose an upper bound on the width of its instructions.}
}
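To cite this paper from a LaTeX document, the entry above can be saved to a bibliography file and referenced by its key. A minimal sketch (the file name refs.bib is an arbitrary choice, not prescribed by PMLR):

```latex
% refs.bib contains the @InProceedings{pmlr-v21-cerno12a, ...} entry above.
\documentclass{article}
\begin{document}
Clearing restarting automata can be identified in the
limit~\cite{pmlr-v21-cerno12a}.

\bibliographystyle{plain}
\bibliography{refs}  % loads refs.bib
\end{document}
```

The same key works unchanged with biblatex (\addbibresource{refs.bib} and \printbibliography).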
Endnote
%0 Conference Paper
%T Clearing Restarting Automata and Grammatical Inference
%A Peter Černo
%B Proceedings of the Eleventh International Conference on Grammatical Inference
%C Proceedings of Machine Learning Research
%D 2012
%E Jeffrey Heinz
%E Colin Higuera
%E Tim Oates
%F pmlr-v21-cerno12a
%I PMLR
%P 54--68
%U https://proceedings.mlr.press/v21/cerno12a.html
%V 21
%X Clearing and subword-clearing restarting automata are linguistically motivated models of automata. We investigate the problem of grammatical inference for such automata based on the given set of positive and negative samples. We show that it is possible to identify these models in the limit. In this way we can learn a large class of languages. On the other hand, we prove that the task of finding a clearing restarting automaton consistent with a given set of positive and negative samples is NP-hard, provided that we impose an upper bound on the width of its instructions.
RIS
TY  - CPAPER
TI  - Clearing Restarting Automata and Grammatical Inference
AU  - Peter Černo
BT  - Proceedings of the Eleventh International Conference on Grammatical Inference
DA  - 2012/08/16
ED  - Jeffrey Heinz
ED  - Colin Higuera
ED  - Tim Oates
ID  - pmlr-v21-cerno12a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 21
SP  - 54
EP  - 68
L1  - http://proceedings.mlr.press/v21/cerno12a/cerno12a.pdf
UR  - https://proceedings.mlr.press/v21/cerno12a.html
AB  - Clearing and subword-clearing restarting automata are linguistically motivated models of automata. We investigate the problem of grammatical inference for such automata based on the given set of positive and negative samples. We show that it is possible to identify these models in the limit. In this way we can learn a large class of languages. On the other hand, we prove that the task of finding a clearing restarting automaton consistent with a given set of positive and negative samples is NP-hard, provided that we impose an upper bound on the width of its instructions.
ER  -
APA
Černo, P. (2012). Clearing Restarting Automata and Grammatical Inference. Proceedings of the Eleventh International Conference on Grammatical Inference, in Proceedings of Machine Learning Research 21:54-68. Available from https://proceedings.mlr.press/v21/cerno12a.html.
