Improving Robustness via Tilted Exponential Layer: A Communication-Theoretic Perspective

Bhagyashree Puranik, Ahmad Beirami, Yao Qin, Upamanyu Madhow
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4510-4518, 2024.

Abstract

State-of-the-art techniques for enhancing robustness of deep networks mostly rely on empirical risk minimization with suitable data augmentation. In this paper, we propose a complementary approach motivated by communication theory, aimed at enhancing the signal-to-noise ratio at the output of a neural network layer via neural competition during learning and inference. In addition to standard empirical risk minimization, neurons compete to sparsely represent layer inputs by maximization of a tilted exponential (TEXP) objective function for the layer. TEXP learning can be interpreted as maximum likelihood estimation of matched filters under a Gaussian model for data noise. Inference in a TEXP layer is accomplished by replacing batch norm by a tilted softmax, which can be interpreted as computation of posterior probabilities for the competing signaling hypotheses represented by each neuron. After providing insights via simplified models, we show, by experimentation on standard image datasets, that TEXP learning and inference enhance robustness against noise and other common corruptions, without requiring data augmentation. Further cumulative gains in robustness against this array of distortions can be obtained by appropriately combining TEXP with data augmentation techniques. The code for all our experiments is available at \url{https://github.com/bhagyapuranik/texp_for_robustness}.
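For intuition, the sketch below illustrates what a tilted-exponential layer objective and a tilted softmax over a layer's neuron activations might look like. It is a minimal illustration under assumed conventions (activations `z` of shape `(batch, K)`, tilt parameter `t`, competition along the neuron dimension), not the authors' exact formulation; see the linked repository for the actual implementation.

```python
import math
import torch
import torch.nn.functional as F


def texp_objective(z: torch.Tensor, t: float) -> torch.Tensor:
    """Illustrative tilted-exponential (TEXP) objective over activations z
    of shape (batch, K): (1/t) * log((1/K) * sum_k exp(t * z_k)), averaged
    over the batch. Larger t weights the strongest-responding neuron more
    heavily, encouraging sparse, competitive representations."""
    k = z.shape[1]
    tilted = (torch.logsumexp(t * z, dim=1) - math.log(k)) / t
    return tilted.mean()


def tilted_softmax(z: torch.Tensor, t: float) -> torch.Tensor:
    """Illustrative tilted softmax across competing neurons, interpretable
    as posterior probabilities of each neuron's signaling hypothesis."""
    return F.softmax(t * z, dim=1)
```

Under this reading, the negative of such an objective would be added to the task loss for the chosen layer during training, while at inference the tilted softmax would take the place of batch normalization for that layer, as described in the abstract.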

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-puranik24a,
  title     = {Improving Robustness via Tilted Exponential Layer: A Communication-Theoretic Perspective},
  author    = {Puranik, Bhagyashree and Beirami, Ahmad and Qin, Yao and Madhow, Upamanyu},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {4510--4518},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/puranik24a/puranik24a.pdf},
  url       = {https://proceedings.mlr.press/v238/puranik24a.html},
  abstract  = {State-of-the-art techniques for enhancing robustness of deep networks mostly rely on empirical risk minimization with suitable data augmentation. In this paper, we propose a complementary approach motivated by communication theory, aimed at enhancing the signal-to-noise ratio at the output of a neural network layer via neural competition during learning and inference. In addition to standard empirical risk minimization, neurons compete to sparsely represent layer inputs by maximization of a tilted exponential (TEXP) objective function for the layer. TEXP learning can be interpreted as maximum likelihood estimation of matched filters under a Gaussian model for data noise. Inference in a TEXP layer is accomplished by replacing batch norm by a tilted softmax, which can be interpreted as computation of posterior probabilities for the competing signaling hypotheses represented by each neuron. After providing insights via simplified models, we show, by experimentation on standard image datasets, that TEXP learning and inference enhances robustness against noise and other common corruptions, without requiring data augmentation. Further cumulative gains in robustness against this array of distortions can be obtained by appropriately combining TEXP with data augmentation techniques. The code for all our experiments is available at \url{https://github.com/bhagyapuranik/texp_for_robustness}.}
}
Endnote
%0 Conference Paper
%T Improving Robustness via Tilted Exponential Layer: A Communication-Theoretic Perspective
%A Bhagyashree Puranik
%A Ahmad Beirami
%A Yao Qin
%A Upamanyu Madhow
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-puranik24a
%I PMLR
%P 4510--4518
%U https://proceedings.mlr.press/v238/puranik24a.html
%V 238
%X State-of-the-art techniques for enhancing robustness of deep networks mostly rely on empirical risk minimization with suitable data augmentation. In this paper, we propose a complementary approach motivated by communication theory, aimed at enhancing the signal-to-noise ratio at the output of a neural network layer via neural competition during learning and inference. In addition to standard empirical risk minimization, neurons compete to sparsely represent layer inputs by maximization of a tilted exponential (TEXP) objective function for the layer. TEXP learning can be interpreted as maximum likelihood estimation of matched filters under a Gaussian model for data noise. Inference in a TEXP layer is accomplished by replacing batch norm by a tilted softmax, which can be interpreted as computation of posterior probabilities for the competing signaling hypotheses represented by each neuron. After providing insights via simplified models, we show, by experimentation on standard image datasets, that TEXP learning and inference enhances robustness against noise and other common corruptions, without requiring data augmentation. Further cumulative gains in robustness against this array of distortions can be obtained by appropriately combining TEXP with data augmentation techniques. The code for all our experiments is available at \url{https://github.com/bhagyapuranik/texp_for_robustness}.
APA
Puranik, B., Beirami, A., Qin, Y. & Madhow, U. (2024). Improving Robustness via Tilted Exponential Layer: A Communication-Theoretic Perspective. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:4510-4518. Available from https://proceedings.mlr.press/v238/puranik24a.html.