One Size Fits All: Can We Train One Denoiser for All Noise Levels?

Abhiram Gnanasambandam, Stanley Chan
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:3576-3586, 2020.

Abstract

When training an estimator such as a neural network for tasks like image denoising, it is often preferred to train one estimator and apply it to all noise levels. The de facto training protocol to achieve this goal is to train the estimator with noisy samples whose noise levels are uniformly distributed across the range of interest. However, why should we allocate the samples uniformly? Can we have more training samples that are less noisy, and fewer samples that are more noisy? What is the optimal distribution? How do we obtain such a distribution? The goal of this paper is to address this training sample distribution problem from a minimax risk optimization perspective. We derive a dual ascent algorithm to determine the optimal sampling distribution, whose convergence is guaranteed as long as the set of admissible estimators is closed and convex. For estimators with non-convex admissible sets such as deep neural networks, our dual formulation converges to a solution of the convex relaxation. We discuss how the algorithm can be implemented in practice. We evaluate the algorithm on linear estimators and deep networks.
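
A minimal sketch of the idea described above (not the authors' implementation): it assumes a discretized grid of noise levels, a toy scalar shrinkage denoiser that can be refit in closed form, and an exponentiated-gradient update as an illustrative stand-in for the paper's dual ascent step. All names and update constants here are assumptions for illustration.

# Illustrative sketch only: alternate between (1) fitting one estimator to the
# current sampling distribution over noise levels and (2) shifting sampling
# probability toward the noise levels where that estimator's risk is largest.
import numpy as np

sigmas = np.linspace(0.1, 1.0, 10)        # discretized noise levels of interest
pi = np.ones_like(sigmas) / len(sigmas)   # sampling distribution, start uniform
signal_var = 1.0                          # assume a unit-variance clean signal

def risk(w, sigma):
    # MSE of the linear estimator x_hat = w * y, where y = x + n, n ~ N(0, sigma^2)
    return (w - 1.0) ** 2 * signal_var + w ** 2 * sigma ** 2

step = 0.2
for it in range(100):
    # (1) best single shrinkage estimator for the current sampling distribution:
    #     argmin_w  sum_i pi_i * risk(w, sigma_i), available in closed form
    w = signal_var / (signal_var + np.dot(pi, sigmas ** 2))

    # (2) ascent step on the distribution: increase the weight of noise levels
    #     where the current estimator performs worst, then renormalize
    r = risk(w, sigmas)
    pi = pi * np.exp(step * r)            # exponentiated-gradient style update
    pi = pi / pi.sum()

print("risk per noise level:", np.round(risk(w, sigmas), 4))
print("sampling distribution:", np.round(pi, 3))

In the paper, the inner step trains the actual estimator (e.g., a deep denoiser) on samples drawn according to the current distribution, and the outer update follows the derived dual ascent rule; the toy above only illustrates how probability mass shifts toward the noise levels where the current estimator's risk is highest.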

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-gnanasambandam20a,
  title     = {One Size Fits All: Can We Train One Denoiser for All Noise Levels?},
  author    = {Gnanasambandam, Abhiram and Chan, Stanley},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {3576--3586},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/gnanasambandam20a/gnanasambandam20a.pdf},
  url       = {https://proceedings.mlr.press/v119/gnanasambandam20a.html},
  abstract  = {When training an estimator such as a neural network for tasks like image denoising, it is often preferred to train one estimator and apply it to all noise levels. The de facto training protocol to achieve this goal is to train the estimator with noisy samples whose noise levels are uniformly distributed across the range of interest. However, why should we allocate the samples uniformly? Can we have more training samples that are less noisy, and fewer samples that are more noisy? What is the optimal distribution? How do we obtain such a distribution? The goal of this paper is to address this training sample distribution problem from a minimax risk optimization perspective. We derive a dual ascent algorithm to determine the optimal sampling distribution of which the convergence is guaranteed as long as the set of admissible estimators is closed and convex. For estimators with non-convex admissible sets such as deep neural networks, our dual formulation converges to a solution of the convex relaxation. We discuss how the algorithm can be implemented in practice. We evaluate the algorithm on linear estimators and deep networks.}
}
Endnote
%0 Conference Paper
%T One Size Fits All: Can We Train One Denoiser for All Noise Levels?
%A Abhiram Gnanasambandam
%A Stanley Chan
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-gnanasambandam20a
%I PMLR
%P 3576--3586
%U https://proceedings.mlr.press/v119/gnanasambandam20a.html
%V 119
%X When training an estimator such as a neural network for tasks like image denoising, it is often preferred to train one estimator and apply it to all noise levels. The de facto training protocol to achieve this goal is to train the estimator with noisy samples whose noise levels are uniformly distributed across the range of interest. However, why should we allocate the samples uniformly? Can we have more training samples that are less noisy, and fewer samples that are more noisy? What is the optimal distribution? How do we obtain such a distribution? The goal of this paper is to address this training sample distribution problem from a minimax risk optimization perspective. We derive a dual ascent algorithm to determine the optimal sampling distribution of which the convergence is guaranteed as long as the set of admissible estimators is closed and convex. For estimators with non-convex admissible sets such as deep neural networks, our dual formulation converges to a solution of the convex relaxation. We discuss how the algorithm can be implemented in practice. We evaluate the algorithm on linear estimators and deep networks.
APA
Gnanasambandam, A. & Chan, S. (2020). One Size Fits All: Can We Train One Denoiser for All Noise Levels? Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:3576-3586. Available from https://proceedings.mlr.press/v119/gnanasambandam20a.html.
