Exploiting the Potential of Standard Convolutional Autoencoders for Image Restoration by Evolutionary Search

Masanori Suganuma, Mete Ozay, Takayuki Okatani
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4771-4780, 2018.

Abstract

Researchers have applied deep neural networks to image restoration tasks, for which they have proposed various network architectures, loss functions, and training methods. In particular, adversarial training, which is employed in recent studies, appears to be a key ingredient of their success. In this paper, we show that simple convolutional autoencoders (CAEs) built upon only standard network components, i.e., convolutional layers and skip connections, can outperform state-of-the-art methods which employ adversarial training and sophisticated loss functions. The secret is to search for good architectures using an evolutionary algorithm. All we did was train the optimized CAEs by minimizing the l2 loss between reconstructed images and their ground truths using the ADAM optimizer. Our experimental results show that this approach achieves 27.8 dB peak signal-to-noise ratio (PSNR) on the CelebA dataset and 33.3 dB on the SVHN dataset, compared to 22.8 dB and 19.0 dB provided by the previous state-of-the-art methods, respectively.
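The architecture search the abstract refers to can be sketched as a simple (1 + λ) evolution strategy over CAE genotypes. The sketch below is illustrative only: the gene encoding, mutation rate, and especially the `fitness` function are hypothetical stand-ins (the paper trains each candidate CAE and scores it by validation PSNR, which is far too costly to reproduce here).

```python
import random

random.seed(0)

# Toy genotype: each gene encodes one layer as (out_channels, kernel_size, has_skip).
CHANNELS = [16, 32, 64, 128]
KERNELS = [1, 3, 5]

def random_gene():
    return (random.choice(CHANNELS), random.choice(KERNELS), random.random() < 0.5)

def random_genotype(n_layers=6):
    return [random_gene() for _ in range(n_layers)]

def mutate(genotype, rate=0.3):
    # Replace each gene independently with probability `rate`.
    return [random_gene() if random.random() < rate else g for g in genotype]

def fitness(genotype):
    # Stand-in for validation PSNR after training the candidate CAE.
    # This toy score merely rewards 3x3 kernels and skip connections,
    # so the search has something deterministic to climb.
    return sum((3 if k == 3 else 1) + (2 if skip else 0)
               for _, k, skip in genotype)

def evolve(generations=20, n_children=4):
    # (1 + lambda) evolution strategy: keep one parent, spawn mutants,
    # promote the best child when it is at least as fit as the parent.
    parent = random_genotype()
    parent_fit = fitness(parent)
    for _ in range(generations):
        children = [mutate(parent) for _ in range(n_children)]
        best = max(children, key=fitness)
        if fitness(best) >= parent_fit:  # accept ties to keep the search moving
            parent, parent_fit = best, fitness(best)
    return parent, parent_fit

best_genotype, best_fit = evolve()
print(best_genotype, best_fit)
```

With six layers, the toy fitness ranges from 6 (all 1x1/5x5 kernels, no skips) to 30 (all 3x3 kernels with skips); a short run typically climbs near the upper end. The winning genotype would then be decoded into a concrete CAE and trained on the restoration task proper.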

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-suganuma18a,
  title     = {Exploiting the Potential of Standard Convolutional Autoencoders for Image Restoration by Evolutionary Search},
  author    = {Suganuma, Masanori and Ozay, Mete and Okatani, Takayuki},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {4771--4780},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/suganuma18a/suganuma18a.pdf},
  url       = {https://proceedings.mlr.press/v80/suganuma18a.html},
  abstract  = {Researchers have applied deep neural networks to image restoration tasks, in which they proposed various network architectures, loss functions, and training methods. In particular, adversarial training, which is employed in recent studies, seems to be a key ingredient to success. In this paper, we show that simple convolutional autoencoders (CAEs) built upon only standard network components, i.e., convolutional layers and skip connections, can outperform the state-of-the-art methods which employ adversarial training and sophisticated loss functions. The secret is to search for good architectures using an evolutionary algorithm. All we did was to train the optimized CAEs by minimizing the l2 loss between reconstructed images and their ground truths using the ADAM optimizer. Our experimental results show that this approach achieves 27.8 dB peak signal to noise ratio (PSNR) on the CelebA dataset and 33.3 dB on the SVHN dataset, compared to 22.8 dB and 19.0 dB provided by the former state-of-the-art methods, respectively.}
}
Endnote
%0 Conference Paper
%T Exploiting the Potential of Standard Convolutional Autoencoders for Image Restoration by Evolutionary Search
%A Masanori Suganuma
%A Mete Ozay
%A Takayuki Okatani
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-suganuma18a
%I PMLR
%P 4771--4780
%U https://proceedings.mlr.press/v80/suganuma18a.html
%V 80
%X Researchers have applied deep neural networks to image restoration tasks, in which they proposed various network architectures, loss functions, and training methods. In particular, adversarial training, which is employed in recent studies, seems to be a key ingredient to success. In this paper, we show that simple convolutional autoencoders (CAEs) built upon only standard network components, i.e., convolutional layers and skip connections, can outperform the state-of-the-art methods which employ adversarial training and sophisticated loss functions. The secret is to search for good architectures using an evolutionary algorithm. All we did was to train the optimized CAEs by minimizing the l2 loss between reconstructed images and their ground truths using the ADAM optimizer. Our experimental results show that this approach achieves 27.8 dB peak signal to noise ratio (PSNR) on the CelebA dataset and 33.3 dB on the SVHN dataset, compared to 22.8 dB and 19.0 dB provided by the former state-of-the-art methods, respectively.
APA
Suganuma, M., Ozay, M. & Okatani, T. (2018). Exploiting the Potential of Standard Convolutional Autoencoders for Image Restoration by Evolutionary Search. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:4771-4780. Available from https://proceedings.mlr.press/v80/suganuma18a.html.
