Adaptive Neural Networks for Efficient Inference

Tolga Bolukbasi, Joseph Wang, Ofer Dekel, Venkatesh Saligrama
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:527-536, 2017.

Abstract

We present an approach to adaptively utilize deep neural networks in order to reduce the evaluation time on new examples without loss of accuracy. Rather than attempting to redesign or approximate existing networks, we propose two schemes that adaptively utilize networks. We first pose an adaptive network evaluation scheme, where we learn a system to adaptively choose the components of a deep network to be evaluated for each example. By allowing examples correctly classified using early layers of the system to exit, we avoid the computational time associated with full evaluation of the network. We extend this to learn a network selection system that adaptively selects the network to be evaluated for each example. We show that computational time can be dramatically reduced by exploiting the fact that many examples can be correctly classified using relatively efficient networks and that complex, computationally costly networks are only necessary for a small fraction of examples. We pose a global objective for learning an adaptive early exit or network selection policy and solve it by reducing the policy learning problem to a layer-by-layer weighted binary classification problem. Empirically, these approaches yield dramatic reductions in computational cost, with up to a 2.8x speedup on state-of-the-art networks from the ImageNet image recognition challenge and minimal (<1%) loss of top-5 accuracy.
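To make the network-selection idea concrete, the following is a minimal, hypothetical sketch (not the authors' exact formulation): a cheap model handles easy examples, and a gate trained as a weighted binary classification problem decides when the expensive model must be evaluated. It assumes scikit-learn and NumPy; the confidence-margin features, the cost_ratio weighting, and all model names are illustrative stand-ins for the layer-by-layer weighted classification described in the paper.

# Hypothetical sketch of adaptive network selection; all names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_selection_gate(cheap_probs, cheap_correct, cost_ratio):
    """Train a gate that escalates an example to the expensive model only when needed.

    cheap_probs   : (n, k) class probabilities from the cheap model
    cheap_correct : (n,) 1 if the cheap model already classifies the example correctly
    cost_ratio    : relative cost of the expensive model vs. the cheap one (assumed)
    """
    # Gate features: confidence of the cheap model's prediction and its margin.
    top2 = np.sort(cheap_probs, axis=1)[:, -2:]
    feats = np.column_stack([top2[:, 1], top2[:, 1] - top2[:, 0]])

    # Target: 1 = escalate to the expensive model, 0 = exit early.
    # Examples the cheap model already gets right are weighted by the extra
    # cost of escalating them; hard examples keep unit weight for accuracy.
    target = 1 - cheap_correct
    weights = np.where(cheap_correct == 1, cost_ratio, 1.0)

    gate = LogisticRegression()
    gate.fit(feats, target, sample_weight=weights)
    return gate

def adaptive_predict(x, cheap_model, expensive_model, gate):
    """Run the cheap model first; run the expensive one only if the gate asks for it."""
    probs = cheap_model(x)                      # cheap forward pass
    top2 = np.sort(probs)[-2:]
    feats = np.array([[top2[1], top2[1] - top2[0]]])
    if gate.predict(feats)[0] == 1:             # gate requests the costly model
        probs = expensive_model(x)              # full forward pass
    return int(np.argmax(probs))

The same gating idea applies to early exits within a single network: after an intermediate layer, an analogous gate decides whether to exit with the current prediction or continue the forward pass through the remaining layers.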

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-bolukbasi17a,
  title     = {Adaptive Neural Networks for Efficient Inference},
  author    = {Tolga Bolukbasi and Joseph Wang and Ofer Dekel and Venkatesh Saligrama},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {527--536},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/bolukbasi17a/bolukbasi17a.pdf},
  url       = {https://proceedings.mlr.press/v70/bolukbasi17a.html}
}
Endnote
%0 Conference Paper
%T Adaptive Neural Networks for Efficient Inference
%A Tolga Bolukbasi
%A Joseph Wang
%A Ofer Dekel
%A Venkatesh Saligrama
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-bolukbasi17a
%I PMLR
%P 527--536
%U https://proceedings.mlr.press/v70/bolukbasi17a.html
%V 70
APA
Bolukbasi, T., Wang, J., Dekel, O. & Saligrama, V. (2017). Adaptive Neural Networks for Efficient Inference. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:527-536. Available from https://proceedings.mlr.press/v70/bolukbasi17a.html.
