Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks

Devansh Arpit, Yingbo Zhou, Bhargava Kota, Venu Govindaraju
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1168-1176, 2016.

Abstract

While the authors of Batch Normalization (BN) identify and address an important problem involved in training deep networks – Internal Covariate Shift – the current solution has certain drawbacks. For instance, BN depends on batch statistics for layerwise input normalization during training, which makes the estimates of the mean and standard deviation of the input (distribution) to hidden layers inaccurate due to shifting parameter values (especially during the initial training epochs). Another fundamental problem with BN is that it cannot be used with a batch size of 1 during training. We address these drawbacks of BN by proposing a non-adaptive normalization technique for removing covariate shift, which we call Normalization Propagation. Our approach does not depend on batch statistics, but rather uses a data-independent parametric estimate of the mean and standard deviation in every layer, and is thus computationally faster than BN. We exploit the observation that the pre-activations before Rectified Linear Units follow a Gaussian distribution in deep networks, and that once the first- and second-order statistics of any given dataset are normalized, we can forward-propagate this normalization without the need to recalculate the approximate statistics for hidden layers.
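The following is a minimal sketch of the idea described in the abstract for a single fully connected ReLU layer, based on the closed-form moments of a rectified standard Gaussian. The layer sizes, random data, and omission of the paper's trainable scale/shift parameters and convolutional case are assumptions made purely for illustration, not the authors' implementation.

```python
# Minimal sketch of Normalization Propagation for one fully connected ReLU layer.
import numpy as np

rng = np.random.default_rng(0)

# 1) Normalize the data ONCE so each input feature has zero mean and unit variance.
X = rng.normal(size=(128, 64))                      # 128 examples, 64 features (illustrative)
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

# 2) Scale each weight row to unit L2 norm, so approximately Gaussian unit-variance
#    inputs produce approximately unit-variance pre-activations per hidden unit.
W = rng.normal(size=(100, 64))
W = W / np.linalg.norm(W, axis=1, keepdims=True)
pre = X @ W.T                                       # pre-activations, roughly N(0, 1)

# 3) Rectify, then undo the distortion analytically: for u ~ N(0, 1),
#    E[max(0, u)] = 1/sqrt(2*pi) and Var[max(0, u)] = (1/2) * (1 - 1/pi).
h = np.maximum(pre, 0.0)
mean_relu = 1.0 / np.sqrt(2.0 * np.pi)
std_relu = np.sqrt(0.5 * (1.0 - 1.0 / np.pi))
h = (h - mean_relu) / std_relu                      # back to ~zero mean, ~unit variance

# h can now feed the next layer with no batch statistics; repeating steps 2-3
# "propagates" the normalization through the network.
print(h.mean(), h.std())                            # should be close to 0 and 1
```

Because the correction in step 3 uses fixed constants derived from the assumed Gaussian pre-activation distribution, nothing in the forward pass depends on the current mini-batch, which is what allows training with a batch size of 1.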

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-arpitb16,
  title     = {Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks},
  author    = {Arpit, Devansh and Zhou, Yingbo and Kota, Bhargava and Govindaraju, Venu},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {1168--1176},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/arpitb16.pdf},
  url       = {https://proceedings.mlr.press/v48/arpitb16.html}
}
APA
Arpit, D., Zhou, Y., Kota, B. & Govindaraju, V. (2016). Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:1168-1176. Available from https://proceedings.mlr.press/v48/arpitb16.html.
