Intermediate Layer Optimization for Inverse Problems using Deep Generative Models

Giannis Daras, Joseph Dean, Ajil Jalal, Alex Dimakis
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:2421-2432, 2021.

Abstract

We propose Intermediate Layer Optimization (ILO), a novel optimization algorithm for solving inverse problems with deep generative models. Instead of optimizing only over the initial latent code, we progressively change the input layer, obtaining successively more expressive generators. To explore these higher-dimensional spaces, our method searches for latent codes that lie within a small l1 ball around the manifold induced by the previous layer. Our theoretical analysis shows that by keeping the radius of the ball relatively small, we can improve the established error bound for compressed sensing with deep generative models. We empirically show that our approach outperforms state-of-the-art methods introduced in StyleGAN2 and PULSE for a wide range of inverse problems, including inpainting, denoising, super-resolution, and compressed sensing.
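
To make the procedure concrete, here is a minimal PyTorch sketch of ILO on a toy fully connected generator (the paper applies the method to StyleGAN2). The layer widths, measurement matrix, step counts, and ball radii below are illustrative assumptions rather than the paper's settings, and the l1-ball projection uses the standard sorting-based method.

import torch

torch.manual_seed(0)

# Toy stand-in for a deep generator, split into per-layer pieces;
# ILO in the paper operates on StyleGAN2's intermediate layers.
layers = torch.nn.ModuleList([
    torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU()),
    torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU()),
    torch.nn.Linear(128, 256),
])

def forward_from(k, h):
    # Run the generator from layer k onward, starting at activation h.
    for layer in layers[k:]:
        h = layer(h)
    return h

def project_l1(x, center, radius):
    # Project x onto the l1 ball of the given radius around center,
    # via the standard sorting-based projection of the difference.
    d = x - center
    if d.abs().sum() <= radius:
        return x
    u, _ = torch.sort(d.abs().flatten(), descending=True)
    css = torch.cumsum(u, dim=0)
    ks = torch.arange(1, u.numel() + 1, dtype=u.dtype)
    rho = torch.nonzero(u * ks > css - radius).max()
    theta = (css[rho] - radius) / (rho + 1)
    return center + torch.clamp(d.abs() - theta, min=0.0) * torch.sign(d)

# Compressed-sensing observation y = A G(z*) for a hidden latent z*.
A = torch.randn(64, 256) / 8.0
with torch.no_grad():
    y = A @ forward_from(0, torch.randn(32))

def ilo(y, steps=200, radii=(10.0, 10.0)):
    # Stage 0: ordinary latent-space optimization over z.
    z = torch.randn(32, requires_grad=True)
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        ((A @ forward_from(0, z) - y) ** 2).sum().backward()
        opt.step()
    with torch.no_grad():
        h = layers[0](z)  # intermediate activation entering layer 1
    # Later stages: optimize the intermediate activation directly,
    # projected onto a small l1 ball around the previous solution.
    for k in range(1, len(layers)):
        center = h.detach().clone()
        h = center.clone().requires_grad_(True)
        opt = torch.optim.Adam([h], lr=1e-2)
        for _ in range(steps):
            opt.zero_grad()
            ((A @ forward_from(k, h) - y) ** 2).sum().backward()
            opt.step()
            with torch.no_grad():
                h.copy_(project_l1(h, center, radii[k - 1]))
        with torch.no_grad():
            h = layers[k](h)  # push through layer k for the next stage
    return h  # final reconstruction after the last layer

x_hat = ilo(y)

Each stage warm-starts from the previous stage's intermediate activation and projects back onto a small l1 ball around it, mirroring the radius-controlled expansion of the solution space that the paper's error-bound analysis relies on.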

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-daras21a,
  title     = {Intermediate Layer Optimization for Inverse Problems using Deep Generative Models},
  author    = {Daras, Giannis and Dean, Joseph and Jalal, Ajil and Dimakis, Alex},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {2421--2432},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/daras21a/daras21a.pdf},
  url       = {https://proceedings.mlr.press/v139/daras21a.html},
  abstract  = {We propose Intermediate Layer Optimization (ILO), a novel optimization algorithm for solving inverse problems with deep generative models. Instead of optimizing only over the initial latent code, we progressively change the input layer obtaining successively more expressive generators. To explore the higher dimensional spaces, our method searches for latent codes that lie within a small l1 ball around the manifold induced by the previous layer. Our theoretical analysis shows that by keeping the radius of the ball relatively small, we can improve the established error bound for compressed sensing with deep generative models. We empirically show that our approach outperforms state-of-the-art methods introduced in StyleGAN2 and PULSE for a wide range of inverse problems including inpainting, denoising, super-resolution and compressed sensing.}
}
Endnote
%0 Conference Paper
%T Intermediate Layer Optimization for Inverse Problems using Deep Generative Models
%A Giannis Daras
%A Joseph Dean
%A Ajil Jalal
%A Alex Dimakis
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-daras21a
%I PMLR
%P 2421--2432
%U https://proceedings.mlr.press/v139/daras21a.html
%V 139
%X We propose Intermediate Layer Optimization (ILO), a novel optimization algorithm for solving inverse problems with deep generative models. Instead of optimizing only over the initial latent code, we progressively change the input layer obtaining successively more expressive generators. To explore the higher dimensional spaces, our method searches for latent codes that lie within a small l1 ball around the manifold induced by the previous layer. Our theoretical analysis shows that by keeping the radius of the ball relatively small, we can improve the established error bound for compressed sensing with deep generative models. We empirically show that our approach outperforms state-of-the-art methods introduced in StyleGAN2 and PULSE for a wide range of inverse problems including inpainting, denoising, super-resolution and compressed sensing.
APA
Daras, G., Dean, J., Jalal, A. & Dimakis, A. (2021). Intermediate Layer Optimization for Inverse Problems using Deep Generative Models. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:2421-2432. Available from https://proceedings.mlr.press/v139/daras21a.html.
