Proceedings of the 28th International Conference on Algorithmic Learning Theory, PMLR 76:434-451, 2017.
Abstract
We study classification problems over string data for hypotheses specified by formulas of monadic second-order logic (MSO). The goal is to design learning algorithms that run in time polynomial in the size of the training set and independent of, or at least sublinear in, the size of the whole data set. We prove negative as well as positive results. If the data set is an unprocessed string to which our algorithms have local access, then learning in sublinear time is impossible even for hypotheses definable in a small fragment of first-order logic. If we allow for a linear-time pre-processing of the string data to build an index data structure, then learning of MSO-definable hypotheses is possible in time polynomial in the size of the training set, independently of the size of the whole data set.
@InProceedings{pmlr-v76-grohe17a,
title = {Learning MSO-definable hypotheses on strings},
author = {Martin Grohe and Christof Löding and Martin Ritzert},
booktitle = {Proceedings of the 28th International Conference on Algorithmic Learning Theory},
pages = {434--451},
year = {2017},
editor = {Steve Hanneke and Lev Reyzin},
volume = {76},
series = {Proceedings of Machine Learning Research},
address = {Kyoto University, Kyoto, Japan},
month = {15--17 Oct},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v76/grohe17a/grohe17a.pdf},
url = {http://proceedings.mlr.press/v76/grohe17a.html},
abstract = {We study classification problems over string data for hypotheses specified by formulas of monadic second-order logic (MSO). The goal is to design learning algorithms that run in time polynomial in the size of the training set and independent of, or at least sublinear in, the size of the whole data set. We prove negative as well as positive results. If the data set is an unprocessed string to which our algorithms have local access, then learning in sublinear time is impossible even for hypotheses definable in a small fragment of first-order logic. If we allow for a linear-time pre-processing of the string data to build an index data structure, then learning of MSO-definable hypotheses is possible in time polynomial in the size of the training set, independently of the size of the whole data set.}
}
%0 Conference Paper
%T Learning MSO-definable hypotheses on strings
%A Martin Grohe
%A Christof Löding
%A Martin Ritzert
%B Proceedings of the 28th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2017
%E Steve Hanneke
%E Lev Reyzin
%F pmlr-v76-grohe17a
%I PMLR
%J Proceedings of Machine Learning Research
%P 434--451
%U http://proceedings.mlr.press/v76/grohe17a.html
%V 76
%W PMLR
%X We study classification problems over string data for hypotheses specified by formulas of monadic second-order logic (MSO). The goal is to design learning algorithms that run in time polynomial in the size of the training set and independent of, or at least sublinear in, the size of the whole data set. We prove negative as well as positive results. If the data set is an unprocessed string to which our algorithms have local access, then learning in sublinear time is impossible even for hypotheses definable in a small fragment of first-order logic. If we allow for a linear-time pre-processing of the string data to build an index data structure, then learning of MSO-definable hypotheses is possible in time polynomial in the size of the training set, independently of the size of the whole data set.
Grohe, M., Löding, C. & Ritzert, M. (2017). Learning MSO-definable hypotheses on strings. Proceedings of the 28th International Conference on Algorithmic Learning Theory, in PMLR 76:434-451.