Super-efficiency of automatic differentiation for functions defined as a minimum

Pierre Ablin, Gabriel Peyré, Thomas Moreau
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:32-41, 2020.

Abstract

In min-min optimization or max-min optimization, one has to compute the gradient of a function defined as a minimum. In most cases, the minimum has no closed-form, and an approximation is obtained via an iterative algorithm. There are two usual ways of estimating the gradient of the function: using either an analytic formula obtained by assuming exactness of the approximation, or automatic differentiation through the algorithm. In this paper, we study the asymptotic error made by these estimators as a function of the optimization error. We find that the error of the automatic estimator is close to the square of the error of the analytic estimator, reflecting a super-efficiency phenomenon. The convergence of the automatic estimator greatly depends on the convergence of the Jacobian of the algorithm. We analyze it for gradient descent and stochastic gradient descent and derive convergence rates for the estimators in these cases. Our analysis is backed by numerical experiments on toy problems and on Wasserstein barycenter computation. Finally, we discuss the computational complexity of these estimators and give practical guidelines to choose between them.
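To make the two estimators concrete, here is a minimal sketch in JAX (not the authors' code; the toy inner objective g, the step size, the iteration count, and the function names are illustrative). It approximates the inner minimizer with a few gradient descent steps, then compares the analytic estimator, which plugs the approximate minimizer into the envelope-theorem formula, with the automatic estimator, which backpropagates through the unrolled iterations.

```python
import jax
import jax.numpy as jnp

def g(x, z):
    # Toy inner objective; h(x) = min_z g(x, z) has a closed form here,
    # which makes it easy to check both estimators (true gradient is x / 6).
    return 0.5 * jnp.sum((z - x) ** 2) + 0.1 * jnp.sum(z ** 2)

def inner_gd(x, n_iter=50, step=0.5):
    # Approximate z*(x) = argmin_z g(x, z) with n_iter gradient descent steps.
    z = jnp.zeros_like(x)
    grad_z = jax.grad(g, argnums=1)
    for _ in range(n_iter):
        z = z - step * grad_z(x, z)
    return z

def analytic_grad(x, n_iter=50):
    # Analytic estimator: evaluate grad_x g(x, z) at the approximate
    # minimizer, treating it as if it were exact (envelope theorem).
    z = inner_gd(x, n_iter)
    return jax.grad(g, argnums=0)(x, z)

def automatic_grad(x, n_iter=50):
    # Automatic estimator: differentiate through the unrolled inner iterations.
    def h_approx(x_):
        return g(x_, inner_gd(x_, n_iter))
    return jax.grad(h_approx)(x)

x = jnp.array([1.0, -2.0, 3.0])
print(analytic_grad(x))
print(automatic_grad(x))  # both approach x / 6 as n_iter grows
```

Because inner_gd uses a plain Python loop, jax.grad unrolls it and reverse-mode differentiation propagates through every iterate, which is precisely the "automatic" estimator discussed in the abstract; stopping the gradient at the approximate minimizer instead yields the "analytic" one.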

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-ablin20a,
  title     = {Super-efficiency of automatic differentiation for functions defined as a minimum},
  author    = {Ablin, Pierre and Peyr{\'e}, Gabriel and Moreau, Thomas},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {32--41},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/ablin20a/ablin20a.pdf},
  url       = {https://proceedings.mlr.press/v119/ablin20a.html},
  abstract  = {In min-min optimization or max-min optimization, one has to compute the gradient of a function defined as a minimum. In most cases, the minimum has no closed-form, and an approximation is obtained via an iterative algorithm. There are two usual ways of estimating the gradient of the function: using either an analytic formula obtained by assuming exactness of the approximation, or automatic differentiation through the algorithm. In this paper, we study the asymptotic error made by these estimators as a function of the optimization error. We find that the error of the automatic estimator is close to the square of the error of the analytic estimator, reflecting a super-efficiency phenomenon. The convergence of the automatic estimator greatly depends on the convergence of the Jacobian of the algorithm. We analyze it for gradient descent and stochastic gradient descent and derive convergence rates for the estimators in these cases. Our analysis is backed by numerical experiments on toy problems and on Wasserstein barycenter computation. Finally, we discuss the computational complexity of these estimators and give practical guidelines to choose between them.}
}
Endnote
%0 Conference Paper
%T Super-efficiency of automatic differentiation for functions defined as a minimum
%A Pierre Ablin
%A Gabriel Peyré
%A Thomas Moreau
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-ablin20a
%I PMLR
%P 32--41
%U https://proceedings.mlr.press/v119/ablin20a.html
%V 119
%X In min-min optimization or max-min optimization, one has to compute the gradient of a function defined as a minimum. In most cases, the minimum has no closed-form, and an approximation is obtained via an iterative algorithm. There are two usual ways of estimating the gradient of the function: using either an analytic formula obtained by assuming exactness of the approximation, or automatic differentiation through the algorithm. In this paper, we study the asymptotic error made by these estimators as a function of the optimization error. We find that the error of the automatic estimator is close to the square of the error of the analytic estimator, reflecting a super-efficiency phenomenon. The convergence of the automatic estimator greatly depends on the convergence of the Jacobian of the algorithm. We analyze it for gradient descent and stochastic gradient descent and derive convergence rates for the estimators in these cases. Our analysis is backed by numerical experiments on toy problems and on Wasserstein barycenter computation. Finally, we discuss the computational complexity of these estimators and give practical guidelines to choose between them.
APA
Ablin, P., Peyré, G. & Moreau, T. (2020). Super-efficiency of automatic differentiation for functions defined as a minimum. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:32-41. Available from https://proceedings.mlr.press/v119/ablin20a.html.