Nyström Method for Accurate and Scalable Implicit Differentiation

Ryuichiro Hataya, Makoto Yamada
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:4643-4654, 2023.

Abstract

The essential difficulty of gradient-based bilevel optimization using implicit differentiation is estimating the inverse Hessian-vector product with respect to the neural network parameters. This paper proposes to tackle this problem with the Nyström method and the Woodbury matrix identity, exploiting the low-rankness of the Hessian. Compared to existing methods that rely on iterative approximation, such as conjugate gradient and the Neumann series approximation, the proposed method avoids numerical instability and can be computed efficiently with matrix operations, without iteration. As a result, the proposed method works stably across various tasks and is faster than iterative approximations. Through experiments including large-scale hyperparameter optimization and meta-learning, we demonstrate that the Nyström method consistently achieves performance comparable or even superior to other approaches. The source code is available at https://github.com/moskomule/hypergrad.
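To make the core computation concrete, the sketch below illustrates how a Nyström approximation of the Hessian combines with the Woodbury matrix identity to give a closed-form approximate inverse Hessian-vector product. It is a minimal NumPy reconstruction written for this page, not the authors' implementation: the function name nystrom_ihvp, its arguments (rank, rho), the column-sampling strategy, and the toy Hessian are all assumptions made for illustration; the actual code is in the hypergrad repository linked above.

    import numpy as np


    def nystrom_ihvp(hvp, v, rank=10, rho=1e-2, seed=None):
        # Approximate (H + rho * I)^{-1} v, where the Hessian H is accessed only
        # through Hessian-vector products hvp(x) = H @ x.
        #
        # H is replaced by the rank-`rank` Nystrom approximation C W^{-1} C^T,
        # built from `rank` sampled columns, and the regularized inverse is then
        # obtained in closed form via the Woodbury matrix identity:
        #   (rho I + C W^{-1} C^T)^{-1} v = (v - C (rho W + C^T C)^{-1} C^T v) / rho
        rng = np.random.default_rng(seed)
        n = v.shape[0]
        idx = rng.choice(n, size=rank, replace=False)

        def basis(i):
            e = np.zeros(n)
            e[i] = 1.0
            return e

        # Sampled Hessian columns C (n x k), obtained with k Hessian-vector products.
        C = np.stack([hvp(basis(i)) for i in idx], axis=1)
        W = C[idx, :]  # k x k block of H at the sampled rows/columns

        inner = rho * W + C.T @ C  # the only linear solve is k x k
        return (v - C @ np.linalg.solve(inner, C.T @ v)) / rho


    # Toy check against the exact regularized inverse on a nearly low-rank Hessian.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 5))
    H = A @ A.T + 1e-4 * np.eye(200)  # PSD and approximately rank 5
    v = rng.standard_normal(200)

    approx = nystrom_ihvp(lambda x: H @ x, v, rank=20, rho=1e-2)
    exact = np.linalg.solve(H + 1e-2 * np.eye(200), v)
    # Relative error; small here because the damping rho dominates the 1e-4
    # residual outside the low-rank part captured by the sampled columns.
    print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))

The point of the closed form is that the only linear system solved has the size of the sampling rank k, not the number of parameters, so it replaces the inner loop of conjugate gradient or Neumann-series iterations; this is where the stability and speed claims in the abstract come from.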

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-hataya23a,
  title     = {Nyström Method for Accurate and Scalable Implicit Differentiation},
  author    = {Hataya, Ryuichiro and Yamada, Makoto},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {4643--4654},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/hataya23a/hataya23a.pdf},
  url       = {https://proceedings.mlr.press/v206/hataya23a.html},
  abstract  = {The essential difficulty of gradient-based bilevel optimization using implicit differentiation is to estimate the inverse Hessian vector product with respect to neural network parameters. This paper proposes to tackle this problem by the Nyström method and the Woodbury matrix identity, exploiting the low-rankness of the Hessian. Compared to existing methods using iterative approximation, such as conjugate gradient and the Neumann series approximation, the proposed method avoids numerical instability and can be efficiently computed in matrix operations without iterations. As a result, the proposed method works stably in various tasks and is faster than iterative approximations. Throughout experiments including large-scale hyperparameter optimization and meta learning, we demonstrate that the Nyström method consistently achieves comparable or even superior performance to other approaches. The source code is available from https://github.com/moskomule/hypergrad.}
}
Endnote
%0 Conference Paper
%T Nyström Method for Accurate and Scalable Implicit Differentiation
%A Ryuichiro Hataya
%A Makoto Yamada
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-hataya23a
%I PMLR
%P 4643--4654
%U https://proceedings.mlr.press/v206/hataya23a.html
%V 206
%X The essential difficulty of gradient-based bilevel optimization using implicit differentiation is to estimate the inverse Hessian vector product with respect to neural network parameters. This paper proposes to tackle this problem by the Nyström method and the Woodbury matrix identity, exploiting the low-rankness of the Hessian. Compared to existing methods using iterative approximation, such as conjugate gradient and the Neumann series approximation, the proposed method avoids numerical instability and can be efficiently computed in matrix operations without iterations. As a result, the proposed method works stably in various tasks and is faster than iterative approximations. Throughout experiments including large-scale hyperparameter optimization and meta learning, we demonstrate that the Nyström method consistently achieves comparable or even superior performance to other approaches. The source code is available from https://github.com/moskomule/hypergrad.
APA
Hataya, R. & Yamada, M. (2023). Nyström Method for Accurate and Scalable Implicit Differentiation. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:4643-4654. Available from https://proceedings.mlr.press/v206/hataya23a.html.
