Best of many worlds: Robust model selection for online supervised learning

Vidya Muthukumar, Mitas Ray, Anant Sahai, Peter Bartlett
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:3177-3186, 2019.

Abstract

We introduce algorithms for online, full-information prediction that are computationally efficient and competitive with contextual tree experts of unknown complexity, in both probabilistic and adversarial settings. We incorporate a novel probabilistic framework of structural risk minimization into existing adaptive algorithms and show that we can robustly learn not only the presence of stochastic structure when it exists, but also the correct model order. When the stochastic data is actually realized from a predictor in the model class considered, we obtain regret bounds that are competitive with the regret of an optimal algorithm that possesses strong side information about both the true model order and whether the process generating the data is stochastic or adversarial. In cases where the data does not arise from any of the models, our algorithm selects models of higher order as we play more rounds. We display empirically improved overall prediction error over other adversarially robust approaches.
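To give a concrete sense of the style of algorithm the abstract describes, the sketch below is a minimal, illustrative Python example of exponential-weights aggregation over predictors of increasing model order, with a structural-risk-style complexity penalty added to each predictor's score. It is not the paper's algorithm; the function name srm_hedge, the specific penalty form, and all constants are assumptions made purely for illustration.

# Illustrative sketch only (not the authors' method): Hedge-style aggregation
# over K "model order" experts, where each expert's cumulative loss is charged
# an SRM-style complexity penalty so that simple models dominate early and
# higher-order models take over only once the data justifies them.

import numpy as np

def srm_hedge(losses, complexities, eta=0.5, penalty_scale=0.1):
    """Aggregate K model-order experts with exponential weights.

    losses:       (T, K) array; losses[t, k] is expert k's loss at round t.
    complexities: length-K array; a complexity measure per model order
                  (e.g., number of leaves in a contextual tree).
    Returns the aggregator's per-round mixture loss.
    """
    T, K = losses.shape
    cum_loss = np.zeros(K)            # cumulative loss per expert
    mixture_losses = np.zeros(T)
    for t in range(T):
        # Penalized score: cumulative loss plus a complexity charge that grows
        # slowly with t (assumed sqrt rate, for illustration only).
        penalty = penalty_scale * complexities * np.sqrt(t + 1)
        score = cum_loss + penalty
        w = np.exp(-eta * (score - score.min()))
        w /= w.sum()
        mixture_losses[t] = w @ losses[t]   # expected loss of the mixture
        cum_loss += losses[t]
    return mixture_losses

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, K = 1000, 4
    # Synthetic losses: higher-order experts are better on average but the
    # gap is small, mimicking a bias/variance trade-off across model orders.
    base = np.array([0.45, 0.40, 0.35, 0.34])
    losses = rng.uniform(0, 1, size=(T, K)) * 0.2 + base
    complexities = np.array([1.0, 2.0, 4.0, 8.0])
    mix = srm_hedge(losses, complexities)
    print("average mixture loss:", round(mix.mean(), 3))
    print("best single expert  :", round(losses.mean(axis=0).min(), 3))

The synthetic example above simply checks that the penalized mixture tracks the best single expert; the actual guarantees in the paper concern regret against contextual tree experts under both stochastic and adversarial data.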

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-muthukumar19a,
  title     = {Best of many worlds: Robust model selection for online supervised learning},
  author    = {Muthukumar, Vidya and Ray, Mitas and Sahai, Anant and Bartlett, Peter},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {3177--3186},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/muthukumar19a/muthukumar19a.pdf},
  url       = {https://proceedings.mlr.press/v89/muthukumar19a.html},
  abstract  = {We introduce algorithms for online, full-information prediction that are computationally efficient and competitive with contextual tree experts of unknown complexity, in both probabilistic and adversarial settings. We incorporate a novel probabilistic framework of structural risk minimization into existing adaptive algorithms and show that we can robustly learn not only the presence of stochastic structure when it exists, but also the correct model order. When the stochastic data is actually realized from a predictor in the model class considered, we obtain regret bounds that are competitive with the regret of an optimal algorithm that possesses strong side information about both the true model order and whether the process generating the data is stochastic or adversarial. In cases where the data does not arise from any of the models, our algorithm selects models of higher order as we play more rounds. We display empirically improved \textit{overall prediction error} over other adversarially robust approaches.}
}
Endnote
%0 Conference Paper
%T Best of many worlds: Robust model selection for online supervised learning
%A Vidya Muthukumar
%A Mitas Ray
%A Anant Sahai
%A Peter Bartlett
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-muthukumar19a
%I PMLR
%P 3177--3186
%U https://proceedings.mlr.press/v89/muthukumar19a.html
%V 89
%X We introduce algorithms for online, full-information prediction that are computationally efficient and competitive with contextual tree experts of unknown complexity, in both probabilistic and adversarial settings. We incorporate a novel probabilistic framework of structural risk minimization into existing adaptive algorithms and show that we can robustly learn not only the presence of stochastic structure when it exists, but also the correct model order. When the stochastic data is actually realized from a predictor in the model class considered, we obtain regret bounds that are competitive with the regret of an optimal algorithm that possesses strong side information about both the true model order and whether the process generating the data is stochastic or adversarial. In cases where the data does not arise from any of the models, our algorithm selects models of higher order as we play more rounds. We display empirically improved overall prediction error over other adversarially robust approaches.
APA
Muthukumar, V., Ray, M., Sahai, A. & Bartlett, P. (2019). Best of many worlds: Robust model selection for online supervised learning. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:3177-3186. Available from https://proceedings.mlr.press/v89/muthukumar19a.html.
