Anti-differentiating approximation algorithms: A case study with min-cuts, spectral, and flow

David Gleich, Michael Mahoney
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1018-1025, 2014.

Abstract

We formalize and illustrate the general concept of algorithmic anti-differentiation: given an algorithmic procedure, e.g., an approximation algorithm for which worst-case approximation guarantees are available or a heuristic that has been engineered to be practically useful but for which a precise theoretical understanding is lacking, an algorithmic anti-derivative is a precise statement of an optimization problem that is exactly solved by that procedure. We explore this concept with a case study of approximation algorithms for finding locally-biased partitions in data graphs, demonstrating connections between min-cut objectives, a personalized version of the popular PageRank vector, and the highly effective "push" procedure for computing an approximation to personalized PageRank. We show, for example, that this latter algorithm solves (exactly, but implicitly) an l1-regularized l2-regression problem, a fact that helps to explain its excellent performance in practice. We expect that, when available, these implicit optimization problems will be critical for rationalizing and predicting the performance of many approximation algorithms on realistic data.
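To make the "push" procedure mentioned in the abstract concrete, the sketch below gives a minimal Python implementation of an Andersen–Chung–Lang-style push loop for approximate personalized PageRank. The graph representation, variable names, and the tolerance eps are illustrative assumptions rather than details taken from the paper, which analyzes a particular variant of this routine and shows that it exactly, if implicitly, solves an l1-regularized l2-regression problem.

    from collections import deque

    def approximate_ppr(graph, seed, alpha=0.85, eps=1e-4):
        """Approximate personalized PageRank via repeated 'push' operations.

        A sketch of a standard Andersen-Chung-Lang-style push loop; the exact
        update and stopping rules analyzed in the paper may differ.
        `graph` maps each node to a list of its neighbors.
        """
        p = {}                       # approximate PageRank mass placed so far
        r = {seed: 1.0}              # residual mass, initially all on the seed
        deg = {u: len(graph[u]) for u in graph}

        queue = deque([seed])        # nodes whose residual may exceed the threshold
        while queue:
            u = queue.popleft()
            ru = r.get(u, 0.0)
            if ru < eps * deg[u]:    # push only when the residual is large
                continue             # relative to the node's degree
            p[u] = p.get(u, 0.0) + alpha * ru   # keep an alpha fraction at u
            r[u] = (1 - alpha) * ru / 2         # retain half of the remainder at u
            share = (1 - alpha) * ru / (2 * deg[u])
            for v in graph[u]:                  # spread the other half evenly
                r[v] = r.get(v, 0.0) + share    # over u's neighbors
                if r[v] >= eps * deg[v]:
                    queue.append(v)
            if r[u] >= eps * deg[u]:
                queue.append(u)
        return p

    # Toy usage on a 4-node graph, seeded at node 0.
    g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    print(approximate_ppr(g, seed=0, alpha=0.85, eps=1e-3))

Stopping once every residual falls below eps times its node's degree is what keeps the computation local to the seed; roughly speaking, this kind of early termination is what the paper interprets as an implicit l1 regularization term.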

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-gleich14,
  title     = {Anti-differentiating approximation algorithms: A case study with min-cuts, spectral, and flow},
  author    = {Gleich, David and Mahoney, Michael},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1018--1025},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/gleich14.pdf},
  url       = {https://proceedings.mlr.press/v32/gleich14.html},
  abstract  = {We formalize and illustrate the general concept of algorithmic anti-differentiation: given an algorithmic procedure, e.g., an approximation algorithm for which worst-case approximation guarantees are available or a heuristic that has been engineered to be practically useful but for which a precise theoretical understanding is lacking, an algorithmic anti-derivative is a precise statement of an optimization problem that is exactly solved by that procedure. We explore this concept with a case study of approximation algorithms for finding locally-biased partitions in data graphs, demonstrating connections between min-cut objectives, a personalized version of the popular PageRank vector, and the highly effective "push" procedure for computing an approximation to personalized PageRank. We show, for example, that this latter algorithm solves (exactly, but implicitly) an l1-regularized l2-regression problem, a fact that helps to explain its excellent performance in practice. We expect that, when available, these implicit optimization problems will be critical for rationalizing and predicting the performance of many approximation algorithms on realistic data.}
}
Endnote
%0 Conference Paper
%T Anti-differentiating approximation algorithms: A case study with min-cuts, spectral, and flow
%A David Gleich
%A Michael Mahoney
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-gleich14
%I PMLR
%P 1018--1025
%U https://proceedings.mlr.press/v32/gleich14.html
%V 32
%N 2
%X We formalize and illustrate the general concept of algorithmic anti-differentiation: given an algorithmic procedure, e.g., an approximation algorithm for which worst-case approximation guarantees are available or a heuristic that has been engineered to be practically useful but for which a precise theoretical understanding is lacking, an algorithmic anti-derivative is a precise statement of an optimization problem that is exactly solved by that procedure. We explore this concept with a case study of approximation algorithms for finding locally-biased partitions in data graphs, demonstrating connections between min-cut objectives, a personalized version of the popular PageRank vector, and the highly effective "push" procedure for computing an approximation to personalized PageRank. We show, for example, that this latter algorithm solves (exactly, but implicitly) an l1-regularized l2-regression problem, a fact that helps to explain its excellent performance in practice. We expect that, when available, these implicit optimization problems will be critical for rationalizing and predicting the performance of many approximation algorithms on realistic data.
RIS
TY  - CPAPER
TI  - Anti-differentiating approximation algorithms: A case study with min-cuts, spectral, and flow
AU  - David Gleich
AU  - Michael Mahoney
BT  - Proceedings of the 31st International Conference on Machine Learning
DA  - 2014/06/18
ED  - Eric P. Xing
ED  - Tony Jebara
ID  - pmlr-v32-gleich14
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 32
IS  - 2
SP  - 1018
EP  - 1025
L1  - http://proceedings.mlr.press/v32/gleich14.pdf
UR  - https://proceedings.mlr.press/v32/gleich14.html
AB  - We formalize and illustrate the general concept of algorithmic anti-differentiation: given an algorithmic procedure, e.g., an approximation algorithm for which worst-case approximation guarantees are available or a heuristic that has been engineered to be practically useful but for which a precise theoretical understanding is lacking, an algorithmic anti-derivative is a precise statement of an optimization problem that is exactly solved by that procedure. We explore this concept with a case study of approximation algorithms for finding locally-biased partitions in data graphs, demonstrating connections between min-cut objectives, a personalized version of the popular PageRank vector, and the highly effective "push" procedure for computing an approximation to personalized PageRank. We show, for example, that this latter algorithm solves (exactly, but implicitly) an l1-regularized l2-regression problem, a fact that helps to explain its excellent performance in practice. We expect that, when available, these implicit optimization problems will be critical for rationalizing and predicting the performance of many approximation algorithms on realistic data.
ER  -
APA
Gleich, D. & Mahoney, M. (2014). Anti-differentiating approximation algorithms: A case study with min-cuts, spectral, and flow. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1018-1025. Available from https://proceedings.mlr.press/v32/gleich14.html.
