The Predicted-Updates Dynamic Model: Offline, Incremental, and Decremental to Fully Dynamic Transformations

Quanquan C. Liu, Vaidehi Srinivas
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3582-3641, 2024.

Abstract

The main bottleneck in designing efficient dynamic algorithms is the unknown nature of the update sequence. In particular, there are problems where the separation in runtime between the best offline or partially dynamic solutions and the best fully dynamic solutions is polynomial, sometimes even exponential. In this paper, we formulate the \emph{predicted-updates dynamic model}, one of the first \emph{beyond-worst-case} models for dynamic algorithms, which generalizes a large set of well-studied dynamic models, including the offline dynamic, incremental, and decremental models, to the fully dynamic setting when given predictions about the update times of the elements. Our model captures real-world settings in which we often have access to side information that allows us to make coarse predictions about future updates. We formulate a framework that bridges the gap between the fully dynamic and the offline/partially dynamic settings, leading to greatly improved runtime bounds over state-of-the-art dynamic algorithms for a variety of important problems, such as triconnectivity, planar digraph all-pairs shortest paths, \(k\)-edge connectivity, and others, when the prediction error is of reasonable magnitude. Our simple framework avoids heavy machinery, potentially leading to a new set of dynamic algorithms that are implementable in practice.
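To make the model concrete, the following is a minimal Python sketch (our own illustration, not code from the paper) of the basic bookkeeping the abstract alludes to: each element comes with a predicted update time, the true update times are revealed online, and prediction quality can be measured by the total deviation between predicted and actual times. The function name l1_prediction_error and the dictionary-based interface are hypothetical; see the paper for the exact error measure and guarantees.

    # Minimal sketch of the predicted-updates setting (illustrative only;
    # names and interface are hypothetical, not from the paper). Each element
    # e comes with a predicted update time; the true times arrive online.

    def l1_prediction_error(predicted, actual):
        """Total deviation sum_e |predicted[e] - actual[e]| over all elements,
        one natural measure of how good the predictions were."""
        return sum(abs(predicted[e] - actual[e]) for e in actual)

    # Example: each prediction is off by at most 2 time steps.
    predicted = {"e1": 3, "e2": 7, "e3": 10}  # predicted update times
    actual    = {"e1": 4, "e2": 7, "e3": 8}   # realized update times
    print(l1_prediction_error(predicted, actual))  # 1 + 0 + 2 = 3

At a high level, a framework in this model can run an offline or partially dynamic algorithm against the predicted update sequence and correct for mispredictions online, with overhead that scales with the prediction error; the precise trade-offs are stated in the paper.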

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-liu24c,
  title     = {The Predicted-Updates Dynamic Model: Offline, Incremental, and Decremental to Fully Dynamic Transformations},
  author    = {Liu, Quanquan C. and Srinivas, Vaidehi},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {3582--3641},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/liu24c/liu24c.pdf},
  url       = {https://proceedings.mlr.press/v247/liu24c.html}
}
Endnote
%0 Conference Paper
%T The Predicted-Updates Dynamic Model: Offline, Incremental, and Decremental to Fully Dynamic Transformations
%A Quanquan C. Liu
%A Vaidehi Srinivas
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-liu24c
%I PMLR
%P 3582--3641
%U https://proceedings.mlr.press/v247/liu24c.html
%V 247
APA
Liu, Q.C. & Srinivas, V. (2024). The Predicted-Updates Dynamic Model: Offline, Incremental, and Decremental to Fully Dynamic Transformations. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:3582-3641. Available from https://proceedings.mlr.press/v247/liu24c.html.
