Deep Learning with Limited Numerical Precision

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, Pritish Narayanan
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1737-1746, 2015.

Abstract

Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network’s behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.
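To illustrate the rounding scheme the abstract refers to, below is a minimal NumPy sketch of stochastic rounding to a fixed-point grid. It is only an illustration of the general technique, not the authors' hardware implementation; the function name stochastic_round_fixed_point and the parameters word_length and frac_bits are assumed names, not taken from the paper.

# Minimal sketch of stochastic rounding to a fixed-point grid (illustrative only).
import numpy as np

def stochastic_round_fixed_point(x, word_length=16, frac_bits=8, rng=None):
    """Stochastically round x to a signed fixed-point grid.

    Each value is rounded down to the nearest multiple of eps = 2**-frac_bits,
    or up to the next multiple with probability equal to the fractional
    remainder, so the rounding is unbiased in expectation.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = 2.0 ** -frac_bits
    scaled = np.asarray(x, dtype=np.float64) / eps
    floor = np.floor(scaled)
    # Round up with probability proportional to the distance from the floor.
    rounded = floor + (rng.random(floor.shape) < (scaled - floor))
    # Saturate to the representable range of a signed word_length-bit word.
    max_int = 2 ** (word_length - 1) - 1
    min_int = -(2 ** (word_length - 1))
    return np.clip(rounded, min_int, max_int) * eps

# Example: quantize a small tensor to a 16-bit fixed-point format with 8 fractional bits.
x = np.array([0.1234, -1.5678, 0.5005])
print(stochastic_round_fixed_point(x, word_length=16, frac_bits=8))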

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-gupta15,
  title     = {Deep Learning with Limited Numerical Precision},
  author    = {Gupta, Suyog and Agrawal, Ankur and Gopalakrishnan, Kailash and Narayanan, Pritish},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1737--1746},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/gupta15.pdf},
  url       = {https://proceedings.mlr.press/v37/gupta15.html},
  abstract  = {Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network’s behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding}
}
Endnote
%0 Conference Paper
%T Deep Learning with Limited Numerical Precision
%A Suyog Gupta
%A Ankur Agrawal
%A Kailash Gopalakrishnan
%A Pritish Narayanan
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-gupta15
%I PMLR
%P 1737--1746
%U https://proceedings.mlr.press/v37/gupta15.html
%V 37
%X Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network’s behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding
RIS
TY - CPAPER
TI - Deep Learning with Limited Numerical Precision
AU - Suyog Gupta
AU - Ankur Agrawal
AU - Kailash Gopalakrishnan
AU - Pritish Narayanan
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-gupta15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 1737
EP - 1746
L1 - http://proceedings.mlr.press/v37/gupta15.pdf
UR - https://proceedings.mlr.press/v37/gupta15.html
AB - Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network’s behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding
ER -
APA
Gupta, S., Agrawal, A., Gopalakrishnan, K. & Narayanan, P. (2015). Deep Learning with Limited Numerical Precision. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:1737-1746. Available from https://proceedings.mlr.press/v37/gupta15.html.
