Shallow-Deep Networks: Understanding and Mitigating Network Overthinking

Yigitcan Kaya, Sanghyun Hong, Tudor Dumitras
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3301-3310, 2019.

Abstract

We characterize a prevalent weakness of deep neural networks (DNNs), 'overthinking', which occurs when a DNN can reach correct predictions before its final layer. Overthinking is computationally wasteful, and it can also be destructive when, by the final layer, a correct prediction changes into a misclassification. Understanding overthinking requires studying how each prediction evolves during a DNN's forward pass, which conventionally is opaque. For prediction transparency, we propose the Shallow-Deep Network (SDN), a generic modification to off-the-shelf DNNs that introduces internal classifiers. We apply SDN to four modern architectures, trained on three image classification tasks, to characterize the overthinking problem. We show that SDNs can mitigate the wasteful effect of overthinking with confidence-based early exits, which reduce the average inference cost by more than 50% and preserve the accuracy. We also find that the destructive effect occurs for 50% of misclassifications on natural inputs and that it can be induced, adversarially, with a recent backdooring attack. To mitigate this effect, we propose a new confusion metric to quantify the internal disagreements that are likely to lead to misclassifications.
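To make the mechanism concrete, here is a minimal sketch (not the authors' released code) of the SDN idea described in the abstract: internal classifier heads are attached to intermediate feature maps of a backbone, and inference stops at the first head whose softmax confidence clears a threshold. The head design, module names, and the threshold value of 0.9 are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InternalClassifier(nn.Module):
    """Small head: pool an intermediate feature map, then apply a linear classifier."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, feats):
        return self.fc(self.pool(feats).flatten(1))

class ShallowDeepSketch(nn.Module):
    """Wraps a sequence of backbone stages; each stage output feeds an internal classifier."""
    def __init__(self, stages, stage_channels, num_classes, threshold=0.9):
        super().__init__()
        self.stages = nn.ModuleList(stages)
        self.heads = nn.ModuleList(
            InternalClassifier(c, num_classes) for c in stage_channels
        )
        self.threshold = threshold

    @torch.no_grad()
    def forward_early_exit(self, x):
        """Confidence-based early exit at inference time (batch size 1 for clarity)."""
        internal_logits = []
        for stage, head in zip(self.stages, self.heads):
            x = stage(x)
            logits = head(x)
            internal_logits.append(logits)
            conf, pred = F.softmax(logits, dim=1).max(dim=1)
            if conf.item() >= self.threshold:
                # Confident enough: return early, skipping the remaining layers.
                return pred, internal_logits
        # No head was confident enough; fall back to the deepest prediction.
        return internal_logits[-1].argmax(dim=1), internal_logits
```

The same list of internal predictions could also be used to compute a disagreement-style score (e.g., how many internal classifiers disagree with the final prediction) as a rough proxy for the confusion metric; the paper's exact definition is given in the full text.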

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-kaya19a,
  title = {Shallow-Deep Networks: Understanding and Mitigating Network Overthinking},
  author = {Kaya, Yigitcan and Hong, Sanghyun and Dumitras, Tudor},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages = {3301--3310},
  year = {2019},
  editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = {97},
  series = {Proceedings of Machine Learning Research},
  month = {09--15 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v97/kaya19a/kaya19a.pdf},
  url = {https://proceedings.mlr.press/v97/kaya19a.html},
  abstract = {We characterize a prevalent weakness of deep neural networks (DNNs), 'overthinking', which occurs when a DNN can reach correct predictions before its final layer. Overthinking is computationally wasteful, and it can also be destructive when, by the final layer, a correct prediction changes into a misclassification. Understanding overthinking requires studying how each prediction evolves during a DNN's forward pass, which conventionally is opaque. For prediction transparency, we propose the Shallow-Deep Network (SDN), a generic modification to off-the-shelf DNNs that introduces internal classifiers. We apply SDN to four modern architectures, trained on three image classification tasks, to characterize the overthinking problem. We show that SDNs can mitigate the wasteful effect of overthinking with confidence-based early exits, which reduce the average inference cost by more than 50% and preserve the accuracy. We also find that the destructive effect occurs for 50% of misclassifications on natural inputs and that it can be induced, adversarially, with a recent backdooring attack. To mitigate this effect, we propose a new confusion metric to quantify the internal disagreements that are likely to lead to misclassifications.}
}
Endnote
%0 Conference Paper
%T Shallow-Deep Networks: Understanding and Mitigating Network Overthinking
%A Yigitcan Kaya
%A Sanghyun Hong
%A Tudor Dumitras
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-kaya19a
%I PMLR
%P 3301--3310
%U https://proceedings.mlr.press/v97/kaya19a.html
%V 97
%X We characterize a prevalent weakness of deep neural networks (DNNs), 'overthinking', which occurs when a DNN can reach correct predictions before its final layer. Overthinking is computationally wasteful, and it can also be destructive when, by the final layer, a correct prediction changes into a misclassification. Understanding overthinking requires studying how each prediction evolves during a DNN's forward pass, which conventionally is opaque. For prediction transparency, we propose the Shallow-Deep Network (SDN), a generic modification to off-the-shelf DNNs that introduces internal classifiers. We apply SDN to four modern architectures, trained on three image classification tasks, to characterize the overthinking problem. We show that SDNs can mitigate the wasteful effect of overthinking with confidence-based early exits, which reduce the average inference cost by more than 50% and preserve the accuracy. We also find that the destructive effect occurs for 50% of misclassifications on natural inputs and that it can be induced, adversarially, with a recent backdooring attack. To mitigate this effect, we propose a new confusion metric to quantify the internal disagreements that are likely to lead to misclassifications.
APA
Kaya, Y., Hong, S. & Dumitras, T. (2019). Shallow-Deep Networks: Understanding and Mitigating Network Overthinking. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3301-3310. Available from https://proceedings.mlr.press/v97/kaya19a.html.
