Optimization for Amortized Inverse Problems

Tianci Liu, Tong Yang, Quan Zhang, Qi Lei
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:22289-22319, 2023.

Abstract

Incorporating a deep generative model as the prior distribution in inverse problems has achieved substantial success in reconstructing images from corrupted observations. Nevertheless, existing optimization approaches largely apply gradient descent without adapting to the non-convex nature of the problem and can be sensitive to initial values, impeding further performance improvement. In this paper, we propose an efficient amortized optimization scheme for inverse problems with a deep generative prior. Specifically, the highly difficult optimization task is decomposed into a sequence of much easier ones. We provide a theoretical guarantee for the proposed algorithm and empirically validate it on different inverse problems. As a result, our approach outperforms baseline methods qualitatively and quantitatively by a large margin.
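The abstract describes the standard generative-prior formulation: recover a latent code z such that a pretrained generator G(z) matches a corrupted observation y = A(x) under a known forward operator A, by minimizing the non-convex loss ||A(G(z)) - y||^2. As a minimal sketch of that setup (not the paper's algorithm), the JAX snippet below contrasts a single gradient-descent run with a generic continuation scheme that decomposes the solve into a sequence of easier, warm-started subproblems; the toy generator, measurement operator, step sizes, and interpolation schedule are all illustrative assumptions.

# Hypothetical sketch, not the paper's method: invert a deep generative
# prior G under a linear forward operator A, comparing one hard
# gradient-descent solve against a continuation scheme that solves a
# sequence of easier, warm-started subproblems.
import jax
import jax.numpy as jnp

k1, k2, k3, k4 = jax.random.split(jax.random.PRNGKey(0), 4)

# Stand-in "generator" G: latent z (dim 8) -> signal x (dim 64).
# A real application would use a trained deep generative model here;
# random weights are an assumption that only fixes the shapes.
W1 = jax.random.normal(k1, (32, 8)) / jnp.sqrt(8.0)
W2 = jax.random.normal(k2, (64, 32)) / jnp.sqrt(32.0)

def G(z):
    return W2 @ jnp.tanh(W1 @ z)

# Forward operator A: 16 random compressive measurements of the 64-dim signal.
A = jax.random.normal(k3, (16, 64)) / 4.0

z_true = jax.random.normal(k4, (8,))
y = A @ G(z_true)                      # the corrupted observation

def loss(z, y_target):                 # non-convex in z through G
    r = A @ G(z) - y_target
    return jnp.dot(r, r)

grad = jax.jit(jax.grad(loss))         # d loss / d z

def gd(z0, y_target, steps, lr=0.05):  # plain gradient descent
    z = z0
    for _ in range(steps):
        z = z - lr * grad(z, y_target)
    return z

# Baseline: one long run on the hard problem; its quality hinges on the
# initial z0 because the loss landscape is non-convex.
z_plain = gd(jnp.zeros(8), y, steps=1000)

# Continuation: interpolate from an easy target y_easy (fit exactly by
# z = 0) toward the real y, warm-starting each subproblem at the previous
# solution. The linear schedule is illustrative only.
y_easy = A @ G(jnp.zeros(8))
z = jnp.zeros(8)
for t in jnp.linspace(0.0, 1.0, 10):
    z = gd(z, (1.0 - t) * y_easy + t * y, steps=100)

print("plain GD loss:       ", float(loss(z_plain, y)))
print("continuation GD loss:", float(loss(z, y)))

In practice G would be a trained GAN or VAE decoder and A the task-specific degradation (an inpainting mask, blur kernel, or measurement matrix); the continuation loop above merely stands in for whatever easy-to-hard decomposition the paper's scheme prescribes.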

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-liu23au,
  title     = {Optimization for Amortized Inverse Problems},
  author    = {Liu, Tianci and Yang, Tong and Zhang, Quan and Lei, Qi},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {22289--22319},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/liu23au/liu23au.pdf},
  url       = {https://proceedings.mlr.press/v202/liu23au.html}
}
Endnote
%0 Conference Paper
%T Optimization for Amortized Inverse Problems
%A Tianci Liu
%A Tong Yang
%A Quan Zhang
%A Qi Lei
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-liu23au
%I PMLR
%P 22289--22319
%U https://proceedings.mlr.press/v202/liu23au.html
%V 202
APA
Liu, T., Yang, T., Zhang, Q. & Lei, Q. (2023). Optimization for Amortized Inverse Problems. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:22289-22319. Available from https://proceedings.mlr.press/v202/liu23au.html.