Processing Megapixel Images with Deep Attention-Sampling Models

Angelos Katharopoulos, Francois Fleuret
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3282-3291, 2019.

Abstract

Existing deep architectures cannot operate on very large signals such as megapixel images due to computational and memory constraints. To tackle this limitation, we propose a fully differentiable, end-to-end trainable model that samples and processes only a fraction of the full-resolution input image. The locations to process are sampled from an attention distribution computed from a low-resolution view of the input. We refer to our method as attention sampling; it can process images of several megapixels on a standard single-GPU setup. We show that sampling from the attention distribution yields an unbiased estimator of the full model with minimal variance, and we derive an unbiased estimator of the gradient that we use to train our model end-to-end with a standard SGD procedure. The method is evaluated on three classification tasks, where it reduces computation and memory footprint by an order of magnitude at the same accuracy as classical architectures. We also show that the sampling is consistent and indeed focuses on informative parts of the input images.
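The core mechanism described above (sample patch locations from an attention distribution over a low-resolution view, then average the features of the sampled patches as an unbiased Monte Carlo estimate of the attention-weighted sum over all patches) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the attention network and feature extractor are stand-in toy functions, and all names (`attention_map`, `feature`, `attention_sample`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_map(low_res):
    """Toy attention network: a softmax over the pixel intensities of the
    low-resolution view (the paper uses a small CNN here)."""
    logits = low_res.ravel()
    e = np.exp(logits - logits.max())
    return (e / e.sum()).reshape(low_res.shape)

def feature(patch):
    """Toy per-patch feature extractor (stands in for a deep network)."""
    return np.array([patch.mean(), patch.std()])

def attention_sample(full_image, low_res, n_samples=10, patch=4):
    """Sample patch locations from the attention distribution and average
    their features.  Since locations are drawn with probability a_i, the
    plain average of f over the samples is an unbiased estimate of
    sum_i a_i * f(patch_i), i.e. of the full attention-weighted model."""
    a = attention_map(low_res)
    h, w = a.shape
    idx = rng.choice(h * w, size=n_samples, p=a.ravel())
    scale = full_image.shape[0] // h  # map low-res coords to full res
    feats = []
    for i in idx:
        r, c = divmod(i, w)
        fr, fc = r * scale, c * scale
        feats.append(feature(full_image[fr:fr + patch, fc:fc + patch]))
    return np.mean(feats, axis=0)

# Usage: a 64x64 "full-resolution" image and its 16x16 downsampled view.
img = np.arange(64 * 64).reshape(64, 64) / 4096.0
low = img[::4, ::4]
pooled = attention_sample(img, low, n_samples=8)
```

Only `n_samples` patches of the full-resolution image are ever touched, which is what makes the approach memory- and compute-efficient; the paper additionally derives an unbiased gradient estimator so the attention network itself can be trained through this sampling step.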

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-katharopoulos19a,
  title =     {Processing Megapixel Images with Deep Attention-Sampling Models},
  author =    {Katharopoulos, Angelos and Fleuret, Francois},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages =     {3282--3291},
  year =      {2019},
  editor =    {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume =    {97},
  series =    {Proceedings of Machine Learning Research},
  month =     {09--15 Jun},
  publisher = {PMLR},
  pdf =       {http://proceedings.mlr.press/v97/katharopoulos19a/katharopoulos19a.pdf},
  url =       {https://proceedings.mlr.press/v97/katharopoulos19a.html},
  abstract =  {Existing deep architectures cannot operate on very large signals such as megapixel images due to computational and memory constraints. To tackle this limitation, we propose a fully differentiable end-to-end trainable model that samples and processes only a fraction of the full resolution input image. The locations to process are sampled from an attention distribution computed from a low resolution view of the input. We refer to our method as attention sampling and it can process images of several megapixels with a standard single GPU setup. We show that sampling from the attention distribution results in an unbiased estimator of the full model with minimal variance, and we derive an unbiased estimator of the gradient that we use to train our model end-to-end with a normal SGD procedure. This new method is evaluated on three classification tasks, where we show that it allows to reduce computation and memory footprint by an order of magnitude for the same accuracy as classical architectures. We also show the consistency of the sampling that indeed focuses on informative parts of the input images.}
}
Endnote
%0 Conference Paper
%T Processing Megapixel Images with Deep Attention-Sampling Models
%A Angelos Katharopoulos
%A Francois Fleuret
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-katharopoulos19a
%I PMLR
%P 3282--3291
%U https://proceedings.mlr.press/v97/katharopoulos19a.html
%V 97
%X Existing deep architectures cannot operate on very large signals such as megapixel images due to computational and memory constraints. To tackle this limitation, we propose a fully differentiable end-to-end trainable model that samples and processes only a fraction of the full resolution input image. The locations to process are sampled from an attention distribution computed from a low resolution view of the input. We refer to our method as attention sampling and it can process images of several megapixels with a standard single GPU setup. We show that sampling from the attention distribution results in an unbiased estimator of the full model with minimal variance, and we derive an unbiased estimator of the gradient that we use to train our model end-to-end with a normal SGD procedure. This new method is evaluated on three classification tasks, where we show that it allows to reduce computation and memory footprint by an order of magnitude for the same accuracy as classical architectures. We also show the consistency of the sampling that indeed focuses on informative parts of the input images.
APA
Katharopoulos, A. &amp; Fleuret, F. (2019). Processing Megapixel Images with Deep Attention-Sampling Models. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3282-3291. Available from https://proceedings.mlr.press/v97/katharopoulos19a.html.