Scale-Invariant Recognition by Weight-Shared CNNs in Parallel

Ryo Takahashi, Takashi Matsubara, Kuniaki Uehara
Proceedings of the Ninth Asian Conference on Machine Learning, PMLR 77:295-310, 2017.

Abstract

Deep convolutional neural networks (CNNs) have become one of the most successful methods for image processing tasks in the past few years. Recent studies on modern residual architectures, which enable CNNs to be much deeper, have achieved much better results thanks to the high expressive ability of their numerous parameters. In general, CNNs are known to be robust to small parallel shifts of objects in images owing to their local receptive fields, weight parameters shared across units, and the pooling layers sandwiched between them. However, CNNs have only limited robustness to other geometric transformations such as scaling and rotation, and this shortcoming remains an obstacle to performance improvement even now. This paper proposes a novel network architecture, the weight-shared multi-stage network (WSMS-Net), and focuses on acquiring scale invariance by constructing multiple stages of CNNs. The WSMS-Net is easily combined with existing deep CNNs, enables them to acquire robustness to scaling, and therefore achieves higher classification accuracy on the CIFAR-10, CIFAR-100, and ImageNet datasets.
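The core idea described above can be illustrated with a minimal sketch: one shared convolution kernel is applied to several scaled copies of the input in parallel, and the per-stage features are concatenated. This is only a toy illustration of weight sharing across scales under assumed details (single kernel, naive average-pool downscaling, global-average features), not the paper's actual WSMS-Net architecture.

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode 2D cross-correlation with a single shared kernel."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def downscale(x, factor):
    """Naive average-pooling downscale by an integer factor."""
    H, W = x.shape
    H2, W2 = H // factor, W // factor
    return x[:H2 * factor, :W2 * factor].reshape(H2, factor, W2, factor).mean(axis=(1, 3))

def multi_stage_features(image, kernel, scales=(1, 2)):
    # The SAME kernel is applied to every scaled copy of the input
    # (weight-shared stages in parallel); each stage's feature map is
    # reduced to a global-average feature and the stages are concatenated.
    feats = []
    for s in scales:
        stage_in = image if s == 1 else downscale(image, s)
        fmap = np.maximum(conv2d(stage_in, kernel), 0.0)  # ReLU
        feats.append(fmap.mean())
    return np.array(feats)

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
k = rng.standard_normal((3, 3))
f = multi_stage_features(img, k)
print(f.shape)  # one feature per stage
```

Because the kernel is shared, an object that appears at half size in the full-resolution stage produces a response similar to the full-size object seen by the downscaled stage, which is the intuition behind acquiring scale robustness this way.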

Cite this Paper


BibTeX
@InProceedings{pmlr-v77-takahashi17a,
  title = {Scale-Invariant Recognition by Weight-Shared CNNs in Parallel},
  author = {Takahashi, Ryo and Matsubara, Takashi and Uehara, Kuniaki},
  booktitle = {Proceedings of the Ninth Asian Conference on Machine Learning},
  pages = {295--310},
  year = {2017},
  editor = {Zhang, Min-Ling and Noh, Yung-Kyun},
  volume = {77},
  series = {Proceedings of Machine Learning Research},
  address = {Yonsei University, Seoul, Republic of Korea},
  month = {15--17 Nov},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v77/takahashi17a/takahashi17a.pdf},
  url = {https://proceedings.mlr.press/v77/takahashi17a.html},
  abstract = {Deep convolutional neural networks (CNNs) have become one of the most successful methods for image processing tasks in past few years. Recent studies on modern residual architectures, enabling CNNs to be much deeper, have achieved much better results thanks to their high expressive ability by numerous parameters. In general, CNNs are known to have the robustness to the small parallel shift of objects in images by their local receptive fields, weight parameters shared by each unit, and pooling layers sandwiching them. However, CNNs have a limited robustness to the other geometric transformations such as scaling and rotation, and this lack becomes an obstacle to performance improvement even now. This paper proposes a novel network architecture, the \emph{weight-shared multi-stage network} (WSMS-Net), and focuses on acquiring the scale invariance by constructing of multiple stages of CNNs. The WSMS-Net is easily combined with existing deep CNNs, enables existing deep CNNs to acquire a robustness to the scaling, and therefore, achieves higher classification accuracy on CIFAR-10, CIFAR-100 and ImageNet datasets.}
}
Endnote
%0 Conference Paper
%T Scale-Invariant Recognition by Weight-Shared CNNs in Parallel
%A Ryo Takahashi
%A Takashi Matsubara
%A Kuniaki Uehara
%B Proceedings of the Ninth Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Min-Ling Zhang
%E Yung-Kyun Noh
%F pmlr-v77-takahashi17a
%I PMLR
%P 295--310
%U https://proceedings.mlr.press/v77/takahashi17a.html
%V 77
%X Deep convolutional neural networks (CNNs) have become one of the most successful methods for image processing tasks in past few years. Recent studies on modern residual architectures, enabling CNNs to be much deeper, have achieved much better results thanks to their high expressive ability by numerous parameters. In general, CNNs are known to have the robustness to the small parallel shift of objects in images by their local receptive fields, weight parameters shared by each unit, and pooling layers sandwiching them. However, CNNs have a limited robustness to the other geometric transformations such as scaling and rotation, and this lack becomes an obstacle to performance improvement even now. This paper proposes a novel network architecture, the weight-shared multi-stage network (WSMS-Net), and focuses on acquiring the scale invariance by constructing of multiple stages of CNNs. The WSMS-Net is easily combined with existing deep CNNs, enables existing deep CNNs to acquire a robustness to the scaling, and therefore, achieves higher classification accuracy on CIFAR-10, CIFAR-100 and ImageNet datasets.
APA
Takahashi, R., Matsubara, T. & Uehara, K. (2017). Scale-Invariant Recognition by Weight-Shared CNNs in Parallel. Proceedings of the Ninth Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 77:295-310. Available from https://proceedings.mlr.press/v77/takahashi17a.html.
