Parsimonious Learning-Augmented Approximations for Dense Instances of $\mathcal{NP}$-hard Problems

Evripidis Bampis, Bruno Escoffier, Michalis Xefteris
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:2700-2714, 2024.

Abstract

The classical work of (Arora et al., 1999) provides a scheme that gives, for any $\epsilon>0$, a polynomial-time $(1-\epsilon)$-approximation algorithm for dense instances of a family of $\mathcal{NP}$-hard problems, such as Max-CUT and Max-$k$-SAT. In this paper we extend and speed up this scheme using a logarithmic number of one-bit predictions. We propose a learning-augmented framework that aims at finding fast algorithms that guarantee approximation consistency, smoothness, and robustness with respect to the prediction error. We provide such algorithms, which moreover use predictions parsimoniously, for dense instances of various optimization problems.
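To make the idea concrete, here is a minimal, illustrative Python sketch (not the paper's actual algorithm) of how one-bit predictions can be used in the dense Max-CUT setting: a small sample of vertices receives a predicted side bit each, and every remaining vertex is then placed greedily against the sampled assignment. All function names and the sampling strategy are assumptions for illustration only.

```python
def predicted_maxcut(adj, sample, pred_bits):
    """Illustrative sketch: fix the sampled vertices according to their
    predicted one-bit side labels, then assign each remaining vertex to
    the side that cuts more of its edges to already-placed neighbours.
    `adj` maps each vertex to a list of its neighbours."""
    side = dict(zip(sample, pred_bits))
    for v in adj:
        if v in side:
            continue
        # edges cut if v joins side 0: neighbours currently on side 1
        gain0 = sum(1 for u in adj[v] if side.get(u) == 1)
        # edges cut if v joins side 1: neighbours currently on side 0
        gain1 = sum(1 for u in adj[v] if side.get(u) == 0)
        side[v] = 0 if gain0 >= gain1 else 1
    return side

def cut_value(adj, side):
    """Number of edges crossing the partition (each edge counted once)."""
    return sum(1 for u in adj for v in adj[u] if u < v and side[u] != side[v])
```

For example, on the complete bipartite graph $K_{3,3}$, sampling one vertex per side with correct predicted bits lets the greedy pass recover the full cut; wrong prediction bits degrade the cut gracefully, which is the kind of smoothness/robustness trade-off the paper formalizes.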

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-bampis24a,
  title     = {Parsimonious Learning-Augmented Approximations for Dense Instances of $\mathcal{NP}$-hard Problems},
  author    = {Bampis, Evripidis and Escoffier, Bruno and Xefteris, Michalis},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {2700--2714},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/bampis24a/bampis24a.pdf},
  url       = {https://proceedings.mlr.press/v235/bampis24a.html},
  abstract  = {The classical work of (Arora et al., 1999) provides a scheme that gives, for any $\epsilon>0$, a polynomial time $1-\epsilon$ approximation algorithm for dense instances of a family of $\mathcal{NP}$-hard problems, such as Max-CUT and Max-$k$-SAT. In this paper we extend and speed up this scheme using a logarithmic number of one-bit predictions. We propose a learning augmented framework which aims at finding fast algorithms which guarantees approximation consistency, smoothness and robustness with respect to the prediction error. We provide such algorithms, which moreover use predictions parsimoniously, for dense instances of various optimization problems.}
}
Endnote
%0 Conference Paper
%T Parsimonious Learning-Augmented Approximations for Dense Instances of $\mathcal{NP}$-hard Problems
%A Evripidis Bampis
%A Bruno Escoffier
%A Michalis Xefteris
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-bampis24a
%I PMLR
%P 2700--2714
%U https://proceedings.mlr.press/v235/bampis24a.html
%V 235
%X The classical work of (Arora et al., 1999) provides a scheme that gives, for any $\epsilon>0$, a polynomial time $1-\epsilon$ approximation algorithm for dense instances of a family of $\mathcal{NP}$-hard problems, such as Max-CUT and Max-$k$-SAT. In this paper we extend and speed up this scheme using a logarithmic number of one-bit predictions. We propose a learning augmented framework which aims at finding fast algorithms which guarantees approximation consistency, smoothness and robustness with respect to the prediction error. We provide such algorithms, which moreover use predictions parsimoniously, for dense instances of various optimization problems.
APA
Bampis, E., Escoffier, B. & Xefteris, M. (2024). Parsimonious Learning-Augmented Approximations for Dense Instances of $\mathcal{NP}$-hard Problems. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:2700-2714. Available from https://proceedings.mlr.press/v235/bampis24a.html.
