InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness

Shruthi Gowda, Bahram Zonooz, Elahe Arani
Proceedings of The 1st Conference on Lifelong Learning Agents, PMLR 199:1026-1042, 2022.

Abstract

Humans rely less on spurious correlations and trivial cues, such as texture, than deep neural networks do, which leads to better generalization and robustness. This can be attributed to the prior knowledge, or high-level cognitive inductive bias, present in the brain. Introducing meaningful inductive bias into neural networks can therefore help them learn more generic, high-level representations and alleviate some of these shortcomings. We propose InBiaseD to distill inductive bias and bring shape-awareness to neural networks. Our method includes a bias alignment objective that enforces the networks to learn more generic representations that are less vulnerable to unintended cues in the data, which results in improved generalization performance. InBiaseD is less susceptible to shortcut learning and also exhibits lower texture bias. The better representations also aid in improving robustness to adversarial attacks; we therefore plug InBiaseD seamlessly into existing adversarial training schemes to show a better trade-off between generalization and robustness.
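The abstract describes a bias alignment objective between a network trained on images and a shape-aware companion. The following is a minimal, hypothetical sketch of what such an objective could look like: one network sees RGB inputs, a second sees a shape-biased view (here, a crude Sobel edge map), and an alignment term ties their features and predictions together. The network architecture, the choice of edge filter, and the loss weights are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sobel_edges(x):
    """Crude shape-biased view: per-channel Sobel gradient magnitude.

    An illustrative stand-in for whatever shape representation the
    method actually uses.
    """
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    c = x.shape[1]
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)  # depthwise kernels
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(x, kx, padding=1, groups=c)
    gy = F.conv2d(x, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

class SmallNet(nn.Module):
    """Tiny placeholder backbone returning (features, logits)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        f = self.features(x)
        return f, self.head(f)

def bias_alignment_loss(img_net, shape_net, x, y,
                        lam_feat=0.1, lam_pred=1.0):
    """Supervised loss on both branches plus alignment terms.

    lam_feat / lam_pred are assumed hyperparameters, not values
    from the paper.
    """
    f_img, z_img = img_net(x)
    f_shp, z_shp = shape_net(sobel_edges(x))
    ce = F.cross_entropy(z_img, y) + F.cross_entropy(z_shp, y)
    feat_align = F.mse_loss(f_img, f_shp)           # feature alignment
    pred_align = F.kl_div(F.log_softmax(z_img, -1),  # prediction alignment
                          F.softmax(z_shp, -1), reduction='batchmean')
    return ce + lam_feat * feat_align + lam_pred * pred_align

x = torch.randn(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
loss = bias_alignment_loss(SmallNet(), SmallNet(), x, y)
```

Both branches remain standard classifiers, so such an objective can sit alongside other training losses (e.g. an adversarial training loss) without architectural changes, which is in the spirit of the plug-in usage the abstract describes.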

Cite this Paper


BibTeX
@InProceedings{pmlr-v199-gowda22a,
  title     = {InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness},
  author    = {Gowda, Shruthi and Zonooz, Bahram and Arani, Elahe},
  booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
  pages     = {1026--1042},
  year      = {2022},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Precup, Doina},
  volume    = {199},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v199/gowda22a/gowda22a.pdf},
  url       = {https://proceedings.mlr.press/v199/gowda22a.html},
  abstract  = {Humans rely less on spurious correlations and trivial cues, such as texture, than deep neural networks do, which leads to better generalization and robustness. This can be attributed to the prior knowledge, or high-level cognitive inductive bias, present in the brain. Introducing meaningful inductive bias into neural networks can therefore help them learn more generic, high-level representations and alleviate some of these shortcomings. We propose InBiaseD to distill inductive bias and bring shape-awareness to neural networks. Our method includes a bias alignment objective that enforces the networks to learn more generic representations that are less vulnerable to unintended cues in the data, which results in improved generalization performance. InBiaseD is less susceptible to shortcut learning and also exhibits lower texture bias. The better representations also aid in improving robustness to adversarial attacks; we therefore plug InBiaseD seamlessly into existing adversarial training schemes to show a better trade-off between generalization and robustness.}
}
Endnote
%0 Conference Paper
%T InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness
%A Shruthi Gowda
%A Bahram Zonooz
%A Elahe Arani
%B Proceedings of The 1st Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2022
%E Sarath Chandar
%E Razvan Pascanu
%E Doina Precup
%F pmlr-v199-gowda22a
%I PMLR
%P 1026--1042
%U https://proceedings.mlr.press/v199/gowda22a.html
%V 199
%X Humans rely less on spurious correlations and trivial cues, such as texture, than deep neural networks do, which leads to better generalization and robustness. This can be attributed to the prior knowledge, or high-level cognitive inductive bias, present in the brain. Introducing meaningful inductive bias into neural networks can therefore help them learn more generic, high-level representations and alleviate some of these shortcomings. We propose InBiaseD to distill inductive bias and bring shape-awareness to neural networks. Our method includes a bias alignment objective that enforces the networks to learn more generic representations that are less vulnerable to unintended cues in the data, which results in improved generalization performance. InBiaseD is less susceptible to shortcut learning and also exhibits lower texture bias. The better representations also aid in improving robustness to adversarial attacks; we therefore plug InBiaseD seamlessly into existing adversarial training schemes to show a better trade-off between generalization and robustness.
APA
Gowda, S., Zonooz, B. & Arani, E. (2022). InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness. Proceedings of The 1st Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 199:1026-1042. Available from https://proceedings.mlr.press/v199/gowda22a.html.