Learning Online Algorithms with Distributional Advice

Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Ali Vakilian, Nikos Zarifis
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:2687-2696, 2021.

Abstract

We study the problem of designing online algorithms given advice about the input. While prior work has focused on deterministic advice, we assume only distributional access to the instances of interest, and the goal is to learn a competitive algorithm given access to i.i.d. samples. We aim to be competitive against an adversary with prior knowledge of the distribution, while also performing well against worst-case inputs. We focus on the classical online problems of ski-rental and prophet inequalities, and provide sample complexity bounds for the underlying learning tasks. First, we point out that for general distributions it is information-theoretically impossible to beat the worst-case competitive ratio with any finite sample size. As our main contribution, we establish strong positive results for well-behaved distributions. Specifically, for the broad class of log-concave distributions, we show that $\mathrm{poly}(1/\epsilon)$ samples suffice to obtain a $(1+\epsilon)$-competitive ratio. Finally, we show that this sample upper bound is close to the best possible, even for very simple classes of distributions.
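To make the learning task concrete, here is a minimal, hypothetical Python sketch of ski-rental with distributional advice: it selects a rent-then-buy threshold by minimizing the empirical expected cost ratio over i.i.d. samples of the season length. The plug-in/ERM-style threshold rule, the function names, and the sampling distribution below are illustrative assumptions, not the paper's actual algorithm or analysis.

```python
import random

def ski_rental_cost(threshold: int, days: int, buy_price: int) -> int:
    # Rent-then-buy strategy: rent each day until day `threshold`, then buy.
    # If the season ends before the threshold, we only ever paid rentals.
    if days < threshold:
        return days                          # rented `days` days, never bought
    return (threshold - 1) + buy_price       # rented threshold-1 days, then bought

def offline_opt(days: int, buy_price: int) -> int:
    # Clairvoyant optimum: either rent every day or buy on day one.
    return min(days, buy_price)

def learn_threshold(samples: list, buy_price: int) -> int:
    # Illustrative plug-in rule (an assumption, not the paper's procedure):
    # pick the threshold minimizing the empirical expected cost ratio ALG/OPT.
    best_t, best_ratio = 1, float("inf")
    for t in range(1, buy_price + 2):        # thresholds past B+1 are dominated
        ratio = sum(ski_rental_cost(t, x, buy_price) / offline_opt(x, buy_price)
                    for x in samples) / len(samples)
        if ratio < best_ratio:
            best_t, best_ratio = t, ratio
    return best_t

# Hypothetical usage: season lengths drawn from a geometric-like distribution.
random.seed(0)
B = 10
samples = [int(random.expovariate(0.2)) + 1 for _ in range(1000)]
print("learned threshold:", learn_threshold(samples, B))
```

With enough samples from a well-behaved distribution, the learned threshold's empirical cost ratio approaches that of the distribution-aware optimum, which is the flavor of guarantee the paper quantifies via its $\mathrm{poly}(1/\epsilon)$ sample bounds.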

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-diakonikolas21a,
  title     = {Learning Online Algorithms with Distributional Advice},
  author    = {Diakonikolas, Ilias and Kontonis, Vasilis and Tzamos, Christos and Vakilian, Ali and Zarifis, Nikos},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {2687--2696},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/diakonikolas21a/diakonikolas21a.pdf},
  url       = {https://proceedings.mlr.press/v139/diakonikolas21a.html},
  abstract  = {We study the problem of designing online algorithms given advice about the input. While prior work has focused on deterministic advice, we assume only distributional access to the instances of interest, and the goal is to learn a competitive algorithm given access to i.i.d. samples. We aim to be competitive against an adversary with prior knowledge of the distribution, while also performing well against worst-case inputs. We focus on the classical online problems of ski-rental and prophet inequalities, and provide sample complexity bounds for the underlying learning tasks. First, we point out that for general distributions it is information-theoretically impossible to beat the worst-case competitive ratio with any finite sample size. As our main contribution, we establish strong positive results for well-behaved distributions. Specifically, for the broad class of log-concave distributions, we show that $\mathrm{poly}(1/\epsilon)$ samples suffice to obtain a $(1+\epsilon)$-competitive ratio. Finally, we show that this sample upper bound is close to the best possible, even for very simple classes of distributions.}
}
Endnote
%0 Conference Paper
%T Learning Online Algorithms with Distributional Advice
%A Ilias Diakonikolas
%A Vasilis Kontonis
%A Christos Tzamos
%A Ali Vakilian
%A Nikos Zarifis
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-diakonikolas21a
%I PMLR
%P 2687--2696
%U https://proceedings.mlr.press/v139/diakonikolas21a.html
%V 139
%X We study the problem of designing online algorithms given advice about the input. While prior work has focused on deterministic advice, we assume only distributional access to the instances of interest, and the goal is to learn a competitive algorithm given access to i.i.d. samples. We aim to be competitive against an adversary with prior knowledge of the distribution, while also performing well against worst-case inputs. We focus on the classical online problems of ski-rental and prophet inequalities, and provide sample complexity bounds for the underlying learning tasks. First, we point out that for general distributions it is information-theoretically impossible to beat the worst-case competitive ratio with any finite sample size. As our main contribution, we establish strong positive results for well-behaved distributions. Specifically, for the broad class of log-concave distributions, we show that $\mathrm{poly}(1/\epsilon)$ samples suffice to obtain a $(1+\epsilon)$-competitive ratio. Finally, we show that this sample upper bound is close to the best possible, even for very simple classes of distributions.
APA
Diakonikolas, I., Kontonis, V., Tzamos, C., Vakilian, A., & Zarifis, N. (2021). Learning Online Algorithms with Distributional Advice. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:2687-2696. Available from https://proceedings.mlr.press/v139/diakonikolas21a.html.
