Estimating Model Uncertainty of Neural Networks in Sparse Information Form

Jongseok Lee, Matthias Humt, Jianxiang Feng, Rudolph Triebel
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:5702-5713, 2020.

Abstract

We present a sparse representation of model uncertainty for Deep Neural Networks (DNNs), where the parameter posterior is approximated with an inverse formulation of the Multivariate Normal Distribution (MND), also known as the information form. The key insight of our work is that the information matrix, i.e., the inverse of the covariance matrix, tends to be sparse in its spectrum. Therefore, dimensionality reduction techniques such as low-rank approximations (LRA) can be exploited effectively. To achieve this, we develop a novel sparsification algorithm and derive a cost-effective analytical sampler. As a result, we show that the information form can be scalably applied to represent model uncertainty in DNNs. Our exhaustive theoretical analysis and empirical evaluations on various benchmarks show the competitiveness of our approach against current methods.
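To make the abstract's pipeline concrete, the NumPy sketch below builds a toy information matrix with a sparse spectrum, compresses it with a low-rank-plus-diagonal approximation, and draws a posterior sample directly in information form. All sizes, variable names, and the tail estimate are illustrative assumptions, not the paper's implementation; the dense eigendecomposition and Cholesky solve stand in for the paper's sparsification algorithm and cost-effective analytical sampler, which are designed precisely to avoid such dense operations at DNN scale.

import numpy as np

rng = np.random.default_rng(0)

# Toy "information matrix" (inverse posterior covariance) with a sparse
# spectrum: a few large eigenvalues plus a near-zero tail. Sizes are toy
# choices for illustration only.
d, k = 200, 10                       # parameter dim, retained rank
U_true = np.linalg.qr(rng.normal(size=(d, d)))[0]
eigs = np.concatenate([
    rng.uniform(50.0, 100.0, size=k),      # dominant modes
    rng.uniform(0.0, 1e-2, size=d - k),    # near-zero tail
])
Lam = (U_true * eigs) @ U_true.T     # full information matrix (dense here)

# Low-rank-plus-diagonal approximation: keep the top-k eigenpairs and
# absorb the discarded tail into a diagonal term so Lam_hat stays
# positive definite.
w, V = np.linalg.eigh(Lam)           # eigenvalues in ascending order
w_top, V_top = w[-k:], V[:, -k:]
delta = w[:-k].mean()                # crude tail estimate (illustrative)
Lam_hat = (V_top * w_top) @ V_top.T + delta * np.eye(d)

# Sampling theta ~ N(mu, Lam_hat^{-1}) without ever forming the covariance:
# with Lam_hat = L L^T (Cholesky), draw z ~ N(0, I) and solve L^T x = z,
# so Cov[x] = L^{-T} L^{-1} = Lam_hat^{-1}.
mu = rng.normal(size=d)              # stand-in for the MAP/mean parameters
L = np.linalg.cholesky(Lam_hat)
z = rng.normal(size=d)
theta = mu + np.linalg.solve(L.T, z)

Because only k eigenpairs and one scalar are retained, the approximate posterior costs O(dk) memory instead of O(d^2), which is the kind of saving that makes the information form tractable for DNN-sized parameter vectors.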

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-lee20b,
  title     = {Estimating Model Uncertainty of Neural Networks in Sparse Information Form},
  author    = {Lee, Jongseok and Humt, Matthias and Feng, Jianxiang and Triebel, Rudolph},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {5702--5713},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/lee20b/lee20b.pdf},
  url       = {https://proceedings.mlr.press/v119/lee20b.html},
  abstract  = {We present a sparse representation of model uncertainty for Deep Neural Networks (DNNs) where the parameter posterior is approximated with an inverse formulation of the Multivariate Normal Distribution (MND), also known as the information form. The key insight of our work is that the information matrix, i.e. the inverse of the covariance matrix tends to be sparse in its spectrum. Therefore, dimensionality reduction techniques such as low rank approximations (LRA) can be effectively exploited. To achieve this, we develop a novel sparsification algorithm and derive a cost-effective analytical sampler. As a result, we show that the information form can be scalably applied to represent model uncertainty in DNNs. Our exhaustive theoretical analysis and empirical evaluations on various benchmarks show the competitiveness of our approach over the current methods.}
}
Endnote
%0 Conference Paper
%T Estimating Model Uncertainty of Neural Networks in Sparse Information Form
%A Jongseok Lee
%A Matthias Humt
%A Jianxiang Feng
%A Rudolph Triebel
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-lee20b
%I PMLR
%P 5702--5713
%U https://proceedings.mlr.press/v119/lee20b.html
%V 119
%X We present a sparse representation of model uncertainty for Deep Neural Networks (DNNs) where the parameter posterior is approximated with an inverse formulation of the Multivariate Normal Distribution (MND), also known as the information form. The key insight of our work is that the information matrix, i.e. the inverse of the covariance matrix tends to be sparse in its spectrum. Therefore, dimensionality reduction techniques such as low rank approximations (LRA) can be effectively exploited. To achieve this, we develop a novel sparsification algorithm and derive a cost-effective analytical sampler. As a result, we show that the information form can be scalably applied to represent model uncertainty in DNNs. Our exhaustive theoretical analysis and empirical evaluations on various benchmarks show the competitiveness of our approach over the current methods.
APA
Lee, J., Humt, M., Feng, J. & Triebel, R. (2020). Estimating Model Uncertainty of Neural Networks in Sparse Information Form. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:5702-5713. Available from https://proceedings.mlr.press/v119/lee20b.html.
