Interactive Learning of a Dynamic Structure

Ehsan Emamjomeh-Zadeh, David Kempe, Mohammad Mahdian, Robert E. Schapire

Proceedings of the 31st International Conference on Algorithmic Learning Theory, PMLR 117:277-296, 2020.

Abstract

We propose a general framework for interactively learning combinatorial structures, such as (binary or non-binary) classifiers, orderings/rankings of items, or clusterings, when the underlying structure changes over time. Inspired by Angluin’s equivalence query model, the algorithm proposes a structure in each round, and it either learns that its proposal is the true structure in this round, or it observes a specific mistake in the proposal. The feedback is correct only with probability $1 - p$, and adversarially incorrect with probability $p$. The algorithm’s goal is to minimize its number of mistakes over the course of $R$ rounds. Our general framework is based on a graph representation of the structures and feedback in a static environment, proposed by Emamjomeh-Zadeh and Kempe (2017). To be able to learn efficiently, it is sufficient that there be a graph $G$ whose nodes are the candidate structures and whose (weighted) edges capture the possible feedback, satisfying a certain natural shortest paths property. To model the evolution of the underlying structure, we consider two natural models, which we term the Shifting Target model and Drifting Target model. In the former, the true structure always belongs to a small pool of candidate structures. In the latter, the structure can change only by transitioning along the edges of a known evolution graph. In order to achieve non-trivial results, we bound the total number of times the underlying structure can change, denoted by $B$. We provide upper and lower bounds on the number of mistakes, which depend on the total number of changes $B$, the total number of structures $n$, and natural measures of complexity of the dynamic models: the size of the pool of candidate structures in the Shifting Target model, and the maximum degree of the evolution graph in the Drifting Target model.
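The interaction protocol described above can be illustrated on the simplest instance of the framework: learning a hidden target on a path graph (e.g., a threshold), where feedback points left or right of the proposal and is truthful with probability $1 - p$. The sketch below is illustrative only, not the paper's algorithm; it follows the multiplicative-weights idea of Emamjomeh-Zadeh and Kempe (2017) for the static setting, with `learn_on_path` and its parameters being names chosen here for the example.

```python
import random

def weighted_median(weights):
    """Smallest index at which the cumulative weight reaches half the total."""
    total = sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= total / 2:
            return i
    return len(weights) - 1

def learn_on_path(n, target, rounds, p, rng):
    """Noisy interactive search for `target` on the path graph 0..n-1.

    Each round, the learner proposes the weighted median of its belief.
    Feedback (left/right/correct) is truthful with probability 1 - p and
    adversarially flipped with probability p. Beliefs are updated
    multiplicatively: structures consistent with the observed feedback
    gain relative mass. Returns the number of mistaken proposals.
    """
    weights = [1.0] * n
    mistakes = 0
    for _ in range(rounds):
        guess = weighted_median(weights)
        if guess == target:
            continue  # proposal is correct this round; no feedback to process
        mistakes += 1
        truthful = rng.random() >= p
        says_right = (target > guess) if truthful else (target < guess)
        for i in range(n):
            consistent = (i > guess) if says_right else (i < guess)
            weights[i] *= (1 - p) if consistent else max(p, 1e-12)
    return mistakes

rng = random.Random(0)
# Noiseless case (p = 0): the weighted median reduces to binary search,
# so the learner makes roughly log2(n) mistakes before locking on.
m = learn_on_path(n=1024, target=700, rounds=50, p=0.0, rng=rng)
```

Note how the shortest-paths property of the framework shows up here: every piece of feedback names a neighbor of the proposal that lies on a shortest path toward the target, which is exactly what lets the multiplicative update concentrate mass on the true structure.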

Cite this Paper

BibTeX

@InProceedings{pmlr-v117-emamjomeh-zadeh20a,
title = {Interactive Learning of a Dynamic Structure},
author = {Emamjomeh-Zadeh, Ehsan and Kempe, David and Mahdian, Mohammad and Schapire, Robert E.},
booktitle = {Proceedings of the 31st International Conference on Algorithmic Learning Theory},
pages = {277--296},
year = {2020},
editor = {Kontorovich, Aryeh and Neu, Gergely},
volume = {117},
series = {Proceedings of Machine Learning Research},
month = {08 Feb--11 Feb},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v117/emamjomeh-zadeh20a/emamjomeh-zadeh20a.pdf},
url = {https://proceedings.mlr.press/v117/emamjomeh-zadeh20a.html},
abstract = {We propose a general framework for interactively learning combinatorial structures, such as (binary or non-binary) classifiers, orderings/rankings of items, or clusterings, when the underlying structure changes over time. Inspired by Angluin’s equivalence query model, the algorithm proposes a structure in each round, and it either learns that its proposal is the true structure in this round, or it observes a specific mistake in the proposal. The feedback is correct only with probability $1 - p$, and adversarially incorrect with probability $p$. The algorithm’s goal is to minimize its number of mistakes over the course of $R$ rounds. Our general framework is based on a graph representation of the structures and feedback in a static environment, proposed by Emamjomeh-Zadeh and Kempe (2017). To be able to learn efficiently, it is sufficient that there be a graph $G$ whose nodes are the candidate structures and whose (weighted) edges capture the possible feedback, satisfying a certain natural shortest paths property. To model the evolution of the underlying structure, we consider two natural models, which we term the Shifting Target model and Drifting Target model. In the former, the true structure always belongs to a small pool of candidate structures. In the latter, the structure can change only by transitioning along the edges of a known evolution graph. In order to achieve non-trivial results, we bound the total number of times the underlying structure can change, denoted by $B$. We provide upper and lower bounds on the number of mistakes, which depend on the total number of changes $B$, the total number of structures $n$, and natural measures of complexity of the dynamic models: the size of the pool of candidate structures in the Shifting Target model, and the maximum degree of the evolution graph in the Drifting Target model.}
}

EndNote

%0 Conference Paper
%T Interactive Learning of a Dynamic Structure
%A Ehsan Emamjomeh-Zadeh
%A David Kempe
%A Mohammad Mahdian
%A Robert E. Schapire
%B Proceedings of the 31st International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Aryeh Kontorovich
%E Gergely Neu
%F pmlr-v117-emamjomeh-zadeh20a
%I PMLR
%P 277--296
%U https://proceedings.mlr.press/v117/emamjomeh-zadeh20a.html
%V 117
%X We propose a general framework for interactively learning combinatorial structures, such as (binary or non-binary) classifiers, orderings/rankings of items, or clusterings, when the underlying structure changes over time. Inspired by Angluin’s equivalence query model, the algorithm proposes a structure in each round, and it either learns that its proposal is the true structure in this round, or it observes a specific mistake in the proposal. The feedback is correct only with probability $1 - p$, and adversarially incorrect with probability $p$. The algorithm’s goal is to minimize its number of mistakes over the course of $R$ rounds. Our general framework is based on a graph representation of the structures and feedback in a static environment, proposed by Emamjomeh-Zadeh and Kempe (2017). To be able to learn efficiently, it is sufficient that there be a graph $G$ whose nodes are the candidate structures and whose (weighted) edges capture the possible feedback, satisfying a certain natural shortest paths property. To model the evolution of the underlying structure, we consider two natural models, which we term the Shifting Target model and Drifting Target model. In the former, the true structure always belongs to a small pool of candidate structures. In the latter, the structure can change only by transitioning along the edges of a known evolution graph. In order to achieve non-trivial results, we bound the total number of times the underlying structure can change, denoted by $B$. We provide upper and lower bounds on the number of mistakes, which depend on the total number of changes $B$, the total number of structures $n$, and natural measures of complexity of the dynamic models: the size of the pool of candidate structures in the Shifting Target model, and the maximum degree of the evolution graph in the Drifting Target model.

APA

Emamjomeh-Zadeh, E., Kempe, D., Mahdian, M. & Schapire, R. E. (2020). Interactive Learning of a Dynamic Structure. Proceedings of the 31st International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 117:277-296. Available from https://proceedings.mlr.press/v117/emamjomeh-zadeh20a.html.