Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability

Belinda Tzen, Tengyuan Liang, Maxim Raginsky
Proceedings of the 31st Conference On Learning Theory, PMLR 75:857-875, 2018.

Abstract

We study the detailed path-wise behavior of the discrete-time Langevin algorithm for non-convex Empirical Risk Minimization (ERM) through the lens of metastability, adopting techniques from Berglund and Gentz (2003). For a particular local optimum of the empirical risk, with an arbitrary initialization, we show that, with high probability, at least one of the following two events occurs: (1) the Langevin trajectory ends up outside the $\varepsilon$-neighborhood of this particular optimum within a short recurrence time; (2) it enters this $\varepsilon$-neighborhood by the recurrence time and stays there until a potentially exponentially long escape time. We call this phenomenon empirical metastability. This two-timescale characterization aligns with the existing literature in two senses. First, the effective recurrence time (i.e., the number of iterations multiplied by the stepsize) is dimension-independent and resembles the convergence time of continuous-time deterministic Gradient Descent (GD). However, unlike GD, the Langevin algorithm does not require strong conditions on local initialization, and can eventually visit all optima. Second, the scaling of the escape time is consistent with the Eyring-Kramers law: the Langevin scheme eventually visits all local minima, but takes an exponentially long time to transition between them. We apply this path-wise concentration result in the context of statistical learning to examine local notions of generalization and optimality.
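To make the object of study concrete, below is a minimal sketch of the discrete-time Langevin algorithm the abstract refers to, namely the iteration $x_{k+1} = x_k - \eta \nabla F(x_k) + \sqrt{2\eta/\beta}\,\xi_k$ with empirical risk $F$, stepsize $\eta$, inverse temperature $\beta$, and standard Gaussian noise $\xi_k$. The function names, default parameters, and the double-well example are illustrative assumptions, not notation from the paper.

import numpy as np

def langevin(grad_F, x0, eta=1e-2, beta=8.0, n_iters=50_000, rng=None):
    # Unadjusted Langevin update: x <- x - eta * grad_F(x) + sqrt(2*eta/beta) * xi,
    # with xi ~ N(0, I). beta is the inverse temperature; larger beta means less
    # noise and hence (exponentially in beta) longer escape times from local minima.
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    traj = np.empty((n_iters + 1,) + x.shape)
    traj[0] = x
    noise_scale = np.sqrt(2.0 * eta / beta)
    for k in range(n_iters):
        x = x - eta * grad_F(x) + noise_scale * rng.standard_normal(x.shape)
        traj[k + 1] = x
    return traj

# Illustrative (hypothetical) double-well empirical risk F(x) = (x^2 - 1)^2 / 4,
# with local minima at x = -1 and x = +1. Started from an arbitrary point, the
# trajectory falls into the epsilon-neighborhood of one minimum within a short
# recurrence time and, with high probability, stays there until an exponentially
# long escape time -- the "empirical metastability" described above.
grad_F = lambda x: x * (x ** 2 - 1.0)
path = langevin(grad_F, x0=np.array([2.0]))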

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-tzen18a,
  title     = {Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability},
  author    = {Tzen, Belinda and Liang, Tengyuan and Raginsky, Maxim},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages     = {857--875},
  year      = {2018},
  editor    = {Bubeck, Sébastien and Perchet, Vianney and Rigollet, Philippe},
  volume    = {75},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v75/tzen18a/tzen18a.pdf},
  url       = {https://proceedings.mlr.press/v75/tzen18a.html}
}
Endnote
%0 Conference Paper
%T Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability
%A Belinda Tzen
%A Tengyuan Liang
%A Maxim Raginsky
%B Proceedings of the 31st Conference On Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Sébastien Bubeck
%E Vianney Perchet
%E Philippe Rigollet
%F pmlr-v75-tzen18a
%I PMLR
%P 857--875
%U https://proceedings.mlr.press/v75/tzen18a.html
%V 75
APA
Tzen, B., Liang, T. & Raginsky, M. (2018). Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:857-875. Available from https://proceedings.mlr.press/v75/tzen18a.html.
