Scalar Invariant Networks with Zero Bias

Chuqin Geng, Xiaojie Xu, Haolin Ye, Xujie Si
Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations, PMLR 228:145-163, 2024.

Abstract

Like weights, bias terms are learnable parameters in many popular machine learning models, including neural networks. Biases are believed to enhance the representational power of neural networks, enabling them to tackle various tasks in computer vision. Nevertheless, we argue that biases can be disregarded for some image-related tasks, such as image classification, by considering the intrinsic distribution of images in the input space and the desired model properties from first principles. Our empirical results suggest that zero-bias neural networks can perform comparably to normal networks on practical image classification tasks. Furthermore, we demonstrate that zero-bias neural networks possess a valuable property known as scalar (multiplicative) invariance: the network’s predictions remain unchanged when the contrast of the input image is scaled. We further extend the scalar invariance property to more general cases, thereby attaining robustness within specific convex regions of the input space. We believe dropping bias terms can be regarded as a geometric prior when designing neural network architectures for image classification, in the same spirit as adopting convolutions as a translation invariance prior.
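
As an illustration of the scalar invariance claim, the following minimal NumPy sketch (an assumed architecture and dimensions chosen for illustration, not the authors' code) checks that a bias-free ReLU network is positively homogeneous, so scaling the input contrast by any alpha > 0 leaves the predicted class unchanged.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bias-free 3-layer ReLU classifier (illustrative sketch only).
W1 = rng.standard_normal((64, 784))
W2 = rng.standard_normal((64, 64))
W3 = rng.standard_normal((10, 64))

def zero_bias_net(x):
    h1 = np.maximum(0.0, W1 @ x)   # ReLU, no bias
    h2 = np.maximum(0.0, W2 @ h1)  # ReLU, no bias
    return W3 @ h2                 # class logits, no bias

x = rng.random(784)                # stand-in "image" with pixels in [0, 1)
alpha = 0.3                        # positive contrast-scaling factor

logits = zero_bias_net(x)
scaled_logits = zero_bias_net(alpha * x)

# Positive homogeneity: f(alpha * x) = alpha * f(x) for alpha > 0,
# so the argmax (the predicted class) does not change.
assert np.allclose(scaled_logits, alpha * logits)
assert scaled_logits.argmax() == logits.argmax()

Because the logits are only rescaled, never reordered, any decision rule based on their ranking (argmax or softmax top-1) is invariant to positive contrast changes, which is the property the paper formalizes and extends.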

Cite this Paper


BibTeX
@InProceedings{pmlr-v228-geng24a,
  title     = {Scalar Invariant Networks with Zero Bias},
  author    = {Geng, Chuqin and Xu, Xiaojie and Ye, Haolin and Si, Xujie},
  booktitle = {Proceedings of the 2nd NeurIPS Workshop on Symmetry and Geometry in Neural Representations},
  pages     = {145--163},
  year      = {2024},
  editor    = {Sanborn, Sophia and Shewmake, Christian and Azeglio, Simone and Miolane, Nina},
  volume    = {228},
  series    = {Proceedings of Machine Learning Research},
  month     = {16 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v228/main/assets/geng24a/geng24a.pdf},
  url       = {https://proceedings.mlr.press/v228/geng24a.html}
}
