Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam

Mohammad Khan, Didrik Nielsen, Voot Tangkaratt, Wu Lin, Yarin Gal, Akash Srivastava
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2611-2620, 2018.

Abstract

Uncertainty computation in deep learning is essential to design robust and reliable systems. Variational inference (VI) is a promising approach for such computation, but requires more effort to implement and execute compared to maximum-likelihood methods. In this paper, we propose new natural-gradient algorithms to reduce such efforts for Gaussian mean-field VI. Our algorithms can be implemented within the Adam optimizer by perturbing the network weights during gradient evaluations, and uncertainty estimates can be cheaply obtained by using the vector that adapts the learning rate. This requires lower memory, computation, and implementation effort than existing VI methods, while obtaining uncertainty estimates of comparable quality. Our empirical results confirm this and further suggest that the weight-perturbation in our algorithm could be useful for exploration in reinforcement learning and stochastic optimization.
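The idea in the abstract can be illustrated with a rough sketch: sample the weights around the mean before each gradient evaluation, run an otherwise standard Adam update, and reuse the second-moment vector (the one that adapts the learning rate) as a cheap precision estimate. This is only an illustrative toy, not the paper's exact algorithm; the names `prior_prec` and `n_data` and the toy quadratic loss are assumptions made for this example.

```python
import numpy as np

def vadam_like_step(mu, m, s, grad_fn, t, rng,
                    lr=0.01, beta1=0.9, beta2=0.999,
                    prior_prec=1.0, n_data=100):
    """One Adam-style step with Gaussian weight perturbation (sketch)."""
    # Posterior std is read off the adaptive-learning-rate vector s.
    sigma = 1.0 / np.sqrt(n_data * (s + prior_prec))
    w = mu + sigma * rng.standard_normal(mu.shape)  # perturbed weights

    g = grad_fn(w)                           # gradient at the perturbed point
    g_tilde = g + prior_prec * mu / n_data   # prior (weight-decay-like) term

    m = beta1 * m + (1 - beta1) * g_tilde    # first-moment EMA
    s = beta2 * s + (1 - beta2) * g**2       # second-moment EMA

    m_hat = m / (1 - beta1**t)               # standard Adam bias correction
    s_hat = s / (1 - beta2**t)

    mu = mu - lr * m_hat / (np.sqrt(s_hat) + prior_prec / n_data)
    return mu, m, s

# Toy usage: minimize ||w - w*||^2, then read uncertainty from s.
rng = np.random.default_rng(0)
mu, m, s = np.zeros(2), np.zeros(2), np.zeros(2)
target = np.array([3.0, -1.0])
grad_fn = lambda w: 2.0 * (w - target)
for t in range(1, 2001):
    mu, m, s = vadam_like_step(mu, m, s, grad_fn, t, rng)
posterior_std = 1.0 / np.sqrt(100 * (s + 1.0))  # cheap uncertainty estimate
```

The only state beyond plain Adam is the mean `mu` playing the role of the weights; no extra per-parameter buffers are needed, which is the memory saving the abstract refers to.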

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-khan18a,
  title     = {Fast and Scalable {B}ayesian Deep Learning by Weight-Perturbation in {A}dam},
  author    = {Khan, Mohammad and Nielsen, Didrik and Tangkaratt, Voot and Lin, Wu and Gal, Yarin and Srivastava, Akash},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {2611--2620},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/khan18a/khan18a.pdf},
  url       = {https://proceedings.mlr.press/v80/khan18a.html},
  abstract  = {Uncertainty computation in deep learning is essential to design robust and reliable systems. Variational inference (VI) is a promising approach for such computation, but requires more effort to implement and execute compared to maximum-likelihood methods. In this paper, we propose new natural-gradient algorithms to reduce such efforts for Gaussian mean-field VI. Our algorithms can be implemented within the Adam optimizer by perturbing the network weights during gradient evaluations, and uncertainty estimates can be cheaply obtained by using the vector that adapts the learning rate. This requires lower memory, computation, and implementation effort than existing VI methods, while obtaining uncertainty estimates of comparable quality. Our empirical results confirm this and further suggest that the weight-perturbation in our algorithm could be useful for exploration in reinforcement learning and stochastic optimization.}
}
Endnote
%0 Conference Paper
%T Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam
%A Mohammad Khan
%A Didrik Nielsen
%A Voot Tangkaratt
%A Wu Lin
%A Yarin Gal
%A Akash Srivastava
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-khan18a
%I PMLR
%P 2611--2620
%U https://proceedings.mlr.press/v80/khan18a.html
%V 80
%X Uncertainty computation in deep learning is essential to design robust and reliable systems. Variational inference (VI) is a promising approach for such computation, but requires more effort to implement and execute compared to maximum-likelihood methods. In this paper, we propose new natural-gradient algorithms to reduce such efforts for Gaussian mean-field VI. Our algorithms can be implemented within the Adam optimizer by perturbing the network weights during gradient evaluations, and uncertainty estimates can be cheaply obtained by using the vector that adapts the learning rate. This requires lower memory, computation, and implementation effort than existing VI methods, while obtaining uncertainty estimates of comparable quality. Our empirical results confirm this and further suggest that the weight-perturbation in our algorithm could be useful for exploration in reinforcement learning and stochastic optimization.
APA
Khan, M., Nielsen, D., Tangkaratt, V., Lin, W., Gal, Y. &amp; Srivastava, A. (2018). Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2611-2620. Available from https://proceedings.mlr.press/v80/khan18a.html.