MemSR: Training Memory-efficient Lightweight Model for Image Super-Resolution

Kailu Wu, Chung-Kuei Lee, Kaisheng Ma
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:24076-24092, 2022.

Abstract

Methods based on deep neural networks with a massive number of layers and skip-connections have made impressive improvements on single image super-resolution (SISR). The skip-connections in these complex models boost performance at the cost of a large amount of memory. As camera resolution on mobile phones increases from 1 million pixels to 100 million pixels, the memory footprint of these algorithms grows by roughly the same factor, which restricts the applicability of these models on memory-limited devices. A plain model consisting of a stack of 3×3 convolutions with ReLU, in contrast, has the highest memory efficiency but performs poorly on super-resolution. This paper aims to derive a winning initialization from a complex teacher network for a plain student network, which can provide performance comparable to complex models. To this end, we convert the teacher model to an equivalent large plain model and derive the plain student’s initialization. We further improve the student’s performance through initialization-aware feature distillation. Extensive experiments suggest that the proposed method results in a model with a competitive trade-off between accuracy and speed at a much lower memory footprint than other state-of-the-art lightweight approaches.
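
The paper's conversion procedure is not spelled out in the abstract, but the core idea of turning a skip-connection into an equivalent plain convolution can be illustrated with a RepVGG-style reparameterization. The PyTorch sketch below is illustrative only (the function name and shapes are assumptions, not the authors' code): it folds an identity branch y = conv(x) + x into a single 3×3 convolution.

import torch
import torch.nn as nn

def fuse_identity_into_conv(conv: nn.Conv2d) -> nn.Conv2d:
    """Fold the skip-connection of y = conv(x) + x into one plain 3x3 conv.

    The identity branch equals a convolution whose kernel is 1 at the
    centre of each channel-c -> channel-c slice and 0 elsewhere, so adding
    that kernel to the original weights yields an exactly equivalent
    single convolution. Requires in_channels == out_channels, stride 1,
    padding 1.
    """
    assert conv.in_channels == conv.out_channels
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      kernel_size=3, padding=1, bias=True)
    with torch.no_grad():
        identity = torch.zeros_like(conv.weight)   # shape (out, in, 3, 3)
        for c in range(conv.out_channels):
            identity[c, c, 1, 1] = 1.0             # delta kernel = identity map
        fused.weight.copy_(conv.weight + identity)
        if conv.bias is not None:
            fused.bias.copy_(conv.bias)
        else:
            fused.bias.zero_()
    return fused

# Sanity check: the fused plain conv reproduces the residual block.
conv = nn.Conv2d(8, 8, kernel_size=3, padding=1)
x = torch.randn(1, 8, 16, 16)
fused = fuse_identity_into_conv(conv)
assert torch.allclose(conv(x) + x, fused(x), atol=1e-5)

Repeating such merges branch by branch is, in principle, how a residual teacher could be flattened into the "equivalent large plain model" the abstract mentions; the paper's actual derivation of the student's initialization and the initialization-aware feature distillation are not reproduced here.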

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-wu22f,
  title     = {{M}em{SR}: Training Memory-efficient Lightweight Model for Image Super-Resolution},
  author    = {Wu, Kailu and Lee, Chung-Kuei and Ma, Kaisheng},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {24076--24092},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/wu22f/wu22f.pdf},
  url       = {https://proceedings.mlr.press/v162/wu22f.html}
}
Endnote
%0 Conference Paper
%T MemSR: Training Memory-efficient Lightweight Model for Image Super-Resolution
%A Kailu Wu
%A Chung-Kuei Lee
%A Kaisheng Ma
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-wu22f
%I PMLR
%P 24076--24092
%U https://proceedings.mlr.press/v162/wu22f.html
%V 162
APA
Wu, K., Lee, C.-K., & Ma, K. (2022). MemSR: Training Memory-efficient Lightweight Model for Image Super-Resolution. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:24076-24092. Available from https://proceedings.mlr.press/v162/wu22f.html.