Learning to Prune: Speeding up Repeated Computations

Daniel Alabi, Adam Tauman Kalai, Katrina Ligett, Cameron Musco, Christos Tzamos, Ellen Vitercik
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:30-33, 2019.

Abstract

Algorithms often must solve sequences of closely related problems. If the algorithm runs a standard procedure with worst-case runtime guarantees on each instance, it will fail to take advantage of valuable structure shared across the problem instances. When a commuter drives from work to home, for example, there are typically only a handful of routes that will ever be the shortest path. A naïve algorithm that does not exploit this common structure may spend most of its time checking roads that will never be in the shortest path. More generally, we can often ignore large swaths of the search space that will likely never contain an optimal solution. We present an algorithm that learns to maximally prune the search space on repeated computations, thereby reducing runtime while provably outputting the correct solution each period with high probability. Our algorithm employs a simple explore-exploit technique resembling those used in online algorithms, though our setting is quite different. We prove that, with respect to our model of pruning search spaces, our approach is optimal up to constant factors. Finally, we illustrate the applicability of our model and algorithm to three classic problems: shortest-path routing, string search, and linear programming. We present experiments confirming that our simple algorithm is effective at significantly reducing the runtime of solving repeated computations.
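To make the explore-exploit idea concrete, here is a minimal sketch in Python for the repeated shortest-path example from the abstract. The class name LearnedPruner, the explore_prob parameter, and the edge-set pruning rule are illustrative assumptions, not the paper's exact algorithm or its high-probability correctness analysis: with probability explore_prob the solver searches the full graph and records the edges of the optimal path; otherwise it searches only the learned subgraph of edges that have previously been optimal.

```python
import heapq
import random

def dijkstra(graph, source, target, allowed_edges=None):
    """Standard Dijkstra; if allowed_edges is given, only those edges are
    considered (the pruned search space). Returns (distance, edge list),
    or (inf, None) if the target is unreachable."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            path = []  # reconstruct the path as a list of edges
            while u != source:
                path.append((prev[u], u))
                u = prev[u]
            path.reverse()
            return d, path
        for v, w in graph.get(u, ()):
            if allowed_edges is not None and (u, v) not in allowed_edges:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float("inf"), None

class LearnedPruner:
    """Illustrative explore-exploit pruner for repeated shortest-path
    queries (a sketch of the idea in the abstract, not the paper's
    exact algorithm). Exploit steps may return a suboptimal path with
    small probability; explore steps keep the learned edge set fresh."""

    def __init__(self, graph, explore_prob=0.1, seed=0):
        self.graph = graph
        self.explore_prob = explore_prob
        self.kept_edges = set()  # edges seen in some past optimal path
        self.rng = random.Random(seed)

    def shortest_path(self, source, target):
        if self.kept_edges and self.rng.random() > self.explore_prob:
            # Exploit: search only the learned, pruned subgraph.
            d, path = dijkstra(self.graph, source, target, self.kept_edges)
            if path is not None:
                return d, path
        # Explore (or fall back): solve on the full graph and remember
        # the edges of the optimal path for future pruned searches.
        d, path = dijkstra(self.graph, source, target)
        if path is not None:
            self.kept_edges.update(path)
        return d, path

if __name__ == "__main__":
    toy = {
        "home": [("a", 1.0), ("b", 5.0)],
        "a": [("work", 1.0)],
        "b": [("work", 1.0)],
        "work": [],
    }
    pruner = LearnedPruner(toy, explore_prob=0.2, seed=1)
    for _ in range(3):
        print(pruner.shortest_path("home", "work"))
```

Over many queries, the exploit branch touches only the handful of edges that have ever been optimal, which is the runtime saving the abstract describes; the occasional explore step lets the learned edge set adapt if the underlying costs change.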

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-alabi19a,
  title     = {Learning to Prune: Speeding up Repeated Computations},
  author    = {Alabi, Daniel and Kalai, Adam Tauman and Ligett, Katrina and Musco, Cameron and Tzamos, Christos and Vitercik, Ellen},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {30--33},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/alabi19a/alabi19a.pdf},
  url       = {https://proceedings.mlr.press/v99/alabi19a.html}
}
EndNote
%0 Conference Paper
%T Learning to Prune: Speeding up Repeated Computations
%A Daniel Alabi
%A Adam Tauman Kalai
%A Katrina Ligett
%A Cameron Musco
%A Christos Tzamos
%A Ellen Vitercik
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-alabi19a
%I PMLR
%P 30--33
%U https://proceedings.mlr.press/v99/alabi19a.html
%V 99
APA
Alabi, D., Kalai, A. T., Ligett, K., Musco, C., Tzamos, C., & Vitercik, E. (2019). Learning to Prune: Speeding up Repeated Computations. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:30-33. Available from https://proceedings.mlr.press/v99/alabi19a.html.
