Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks

Ahmed Taha Elthakeb, Prannoy Pilligundla, Fatemeh Mireshghallah, Alexander Cloninger, Hadi Esmaeilzadeh
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2880-2891, 2020.

Abstract

The deep layers of modern neural networks extract a rich set of features as an input propagates through the network. This paper sets out to harvest these intermediate representations for quantization with minimal accuracy loss, while significantly reducing the memory footprint and compute intensity of the DNN. It applies knowledge distillation through the teacher-student paradigm (Hinton et al., 2015) in a novel setting that exploits the feature extraction capability of DNNs for higher-accuracy quantization. Our algorithm logically divides a pretrained full-precision DNN into multiple sections, each of which exposes its intermediate features so that a team of student sections can be trained independently in the quantized domain and simply stitched together afterwards. This divide-and-conquer strategy makes it possible to train each student section in isolation, speeding up training by enabling parallelization. Experiments on various DNNs (AlexNet, LeNet, MobileNet, ResNet-18, ResNet-20, SVHN, and VGG-11) show that this approach, called DCQ (Divide and Conquer Quantization), improves the performance of a state-of-the-art quantized training technique, DoReFa-Net (Zhou et al., 2016), by 21.6% and 9.3% on average for binary and ternary quantization, respectively. Additionally, we show that incorporating DCQ into existing quantized training methods yields higher accuracies than those previously reported by multiple state-of-the-art quantized training methods.
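
The following minimal PyTorch sketch illustrates the divide-and-conquer distillation idea described in the abstract. The two-section toy network, the straight-through fake quantizer, the per-section MSE loss on intermediate features, and all hyperparameters are illustrative assumptions rather than the paper's exact recipe (which builds on DoReFa-style quantization and quantizes weights as well as activations).

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class FakeQuantize(nn.Module):
    """Per-tensor symmetric fake quantizer with a straight-through estimator.
    A stand-in for the DoReFa-style quantizer referenced in the abstract."""
    def __init__(self, bits=2):
        super().__init__()
        self.levels = 2 ** (bits - 1) - 1  # e.g. bits=2 -> ternary {-1, 0, +1}

    def forward(self, x):
        scale = x.detach().abs().max().clamp(min=1e-8)
        q = torch.round(x / scale * self.levels) / self.levels * scale
        return x + (q - x).detach()  # forward: quantized, backward: identity


# A toy full-precision "teacher", logically divided into two sections.
teacher = nn.Sequential(
    nn.Sequential(nn.Linear(32, 64), nn.ReLU()),  # section 0
    nn.Sequential(nn.Linear(64, 10)),             # section 1
)
teacher.eval()

# Each student mirrors one teacher section; only activations are fake-quantized
# here for brevity (the paper also quantizes weights).
students = [nn.Sequential(copy.deepcopy(sec), FakeQuantize(bits=2)) for sec in teacher]


def train_section(idx, batches, epochs=5, lr=1e-3):
    """Train student section `idx` in isolation by regressing onto the teacher's
    intermediate features for that section."""
    student = students[idx]
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x in batches:
            with torch.no_grad():
                inp = x
                for sec in teacher[:idx]:   # teacher supplies the section's input...
                    inp = sec(inp)
                target = teacher[idx](inp)  # ...and the target intermediate features
            loss = F.mse_loss(student(inp), target)
            opt.zero_grad()
            loss.backward()
            opt.step()


batches = [torch.randn(16, 32) for _ in range(10)]
for i in range(len(students)):  # independent sections: this loop could run in parallel
    train_section(i, batches)

# "Stitch" the trained quantized sections back into a single network.
quantized_model = nn.Sequential(*students)
print(quantized_model(torch.randn(1, 32)).shape)  # torch.Size([1, 10])

Because each student section reads both its inputs and its regression targets from the frozen teacher, the per-section training runs are independent of one another and can be launched in parallel, which is the source of the training speedup mentioned above.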

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-elthakeb20a,
  title     = {Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks},
  author    = {Elthakeb, Ahmed Taha and Pilligundla, Prannoy and Mireshghallah, Fatemeh and Cloninger, Alexander and Esmaeilzadeh, Hadi},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2880--2891},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/elthakeb20a/elthakeb20a.pdf},
  url       = {https://proceedings.mlr.press/v119/elthakeb20a.html}
}
Endnote
%0 Conference Paper
%T Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks
%A Ahmed Taha Elthakeb
%A Prannoy Pilligundla
%A Fatemeh Mireshghallah
%A Alexander Cloninger
%A Hadi Esmaeilzadeh
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-elthakeb20a
%I PMLR
%P 2880--2891
%U https://proceedings.mlr.press/v119/elthakeb20a.html
%V 119
APA
Elthakeb, A. T., Pilligundla, P., Mireshghallah, F., Cloninger, A., & Esmaeilzadeh, H. (2020). Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2880-2891. Available from https://proceedings.mlr.press/v119/elthakeb20a.html.