Parsimonious Learning-Augmented Caching

Sungjin Im, Ravi Kumar, Aditya Petety, Manish Purohit
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:9588-9601, 2022.

Abstract

Learning-augmented algorithms, in which traditional algorithms are augmented with machine-learned predictions, have emerged as a framework for going beyond worst-case analysis. The overarching goal is to design algorithms that perform near-optimally when the predictions are accurate, yet retain certain worst-case guarantees irrespective of prediction accuracy. This framework has been successfully applied to online problems such as caching, where predictions can be used to alleviate uncertainty. In this paper we introduce and study the setting in which the learning-augmented algorithm can utilize the predictions parsimoniously. We consider the caching problem, which has been extensively studied in the learning-augmented setting, and show that one can achieve quantitatively similar results while using only a sublinear number of predictions.
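
The paper's actual algorithms and bounds are in the full text; purely as a rough, self-contained illustration of the idea, the Python sketch below serves a request sequence with a cache that consults a next-use predictor only on misses, so the number of predictor queries is bounded by the number of misses rather than by the length of the request sequence. The function name cache_with_sparse_predictions, the predict_next_use interface, and the query-on-miss policy are illustrative assumptions, not the authors' scheme.

from typing import Callable, Dict, List

def cache_with_sparse_predictions(
    requests: List[str],
    capacity: int,
    predict_next_use: Callable[[str, int], float],
) -> int:
    """Serve `requests` with a size-`capacity` cache, consulting the
    predictor only on cache misses. Returns the number of misses.

    predict_next_use(page, t) should return the (predicted) time of the
    next request to `page` after time t, or float('inf') if none.
    """
    cache: Dict[str, float] = {}  # page -> predicted next use, as of insertion
    misses = 0
    for t, page in enumerate(requests):
        if page in cache:
            continue  # hit: no predictor query; stored predictions may go stale
        misses += 1
        if len(cache) >= capacity:
            # Evict the page whose (possibly stale) predicted next use is
            # furthest in the future, mimicking Belady's offline rule.
            victim = max(cache, key=cache.get)
            del cache[victim]
        # One predictor query per miss, so total queries <= total misses,
        # which can be far smaller than len(requests).
        cache[page] = predict_next_use(page, t)
    return misses

# Example with a perfect predictor derived from the sequence itself.
seq = ["a", "b", "c", "a", "b", "d", "a", "c"]

def oracle(page: str, t: int) -> float:
    for i in range(t + 1, len(seq)):
        if seq[i] == page:
            return float(i)
    return float("inf")

print(cache_with_sparse_predictions(seq, capacity=2, predict_next_use=oracle))  # 6 misses

Tolerating stale predictions between queries is what keeps the query count low in this toy version; the paper's contribution is showing that guarantees quantitatively similar to the fully prediction-assisted setting survive with only a sublinear number of queries.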

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-im22a,
  title     = {Parsimonious Learning-Augmented Caching},
  author    = {Im, Sungjin and Kumar, Ravi and Petety, Aditya and Purohit, Manish},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {9588--9601},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/im22a/im22a.pdf},
  url       = {https://proceedings.mlr.press/v162/im22a.html},
  abstract  = {Learning-augmented algorithms—in which, traditional algorithms are augmented with machine-learned predictions—have emerged as a framework to go beyond worst-case analysis. The overarching goal is to design algorithms that perform near-optimally when the predictions are accurate yet retain certain worst-case guarantees irrespective of the accuracy of the predictions. This framework has been successfully applied to online problems such as caching where the predictions can be used to alleviate uncertainties. In this paper we introduce and study the setting in which the learning-augmented algorithm can utilize the predictions parsimoniously. We consider the caching problem—which has been extensively studied in the learning-augmented setting—and show that one can achieve quantitatively similar results but only using a sublinear number of predictions.}
}
Endnote
%0 Conference Paper
%T Parsimonious Learning-Augmented Caching
%A Sungjin Im
%A Ravi Kumar
%A Aditya Petety
%A Manish Purohit
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-im22a
%I PMLR
%P 9588--9601
%U https://proceedings.mlr.press/v162/im22a.html
%V 162
%X Learning-augmented algorithms—in which, traditional algorithms are augmented with machine-learned predictions—have emerged as a framework to go beyond worst-case analysis. The overarching goal is to design algorithms that perform near-optimally when the predictions are accurate yet retain certain worst-case guarantees irrespective of the accuracy of the predictions. This framework has been successfully applied to online problems such as caching where the predictions can be used to alleviate uncertainties. In this paper we introduce and study the setting in which the learning-augmented algorithm can utilize the predictions parsimoniously. We consider the caching problem—which has been extensively studied in the learning-augmented setting—and show that one can achieve quantitatively similar results but only using a sublinear number of predictions.
APA
Im, S., Kumar, R., Petety, A., & Purohit, M. (2022). Parsimonious Learning-Augmented Caching. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:9588-9601. Available from https://proceedings.mlr.press/v162/im22a.html.
