LatentGNN: Learning Efficient Nonlocal Relations for Visual Recognition
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7374-7383, 2019.
Abstract
Capturing long-range dependencies in feature representations is crucial for many visual recognition tasks. Despite recent successes of deep convolutional networks, it remains challenging to model non-local context relations between visual features. A promising strategy is to model the feature context by a fully-connected graph neural network (GNN), which augments traditional convolutional features with an estimated non-local context representation. However, most GNN-based approaches require computing a dense graph affinity matrix and hence have difficulty in scaling up to tackle complex real-world visual problems. In this work, we propose an efficient yet flexible non-local relation representation based on a novel class of graph neural networks. Our key idea is to introduce a latent space to reduce the complexity of the graph, which allows us to use a low-rank representation for the graph affinity matrix and to achieve linear complexity in computation. Extensive experimental evaluations on three major visual recognition tasks show that our method outperforms prior works by a large margin while maintaining a low computation cost.
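The low-rank idea in the abstract can be illustrated with a small sketch. This is not the paper's exact formulation; the function names, weight matrices, and the choice of k latent nodes are all illustrative assumptions. The point it demonstrates: a dense non-local block forms an N x N affinity matrix (quadratic in the number of features N), whereas routing messages through k << N latent nodes keeps every matrix product at O(Nkd), i.e. linear in N.

```python
import numpy as np

def softmax(S):
    """Row-wise softmax for normalizing affinity scores."""
    S = np.exp(S - S.max(axis=1, keepdims=True))
    return S / S.sum(axis=1, keepdims=True)

def dense_nonlocal(X):
    """Baseline: full N x N affinity matrix -- O(N^2 d) time and O(N^2) memory."""
    A = softmax(X @ X.T)             # (N, N) dense affinity
    return A @ X                     # aggregated non-local context, (N, d)

def latent_nonlocal(X, W_enc, W_dec):
    """Low-rank sketch: visible -> latent -> visible message passing.

    W_enc and W_dec are hypothetical learned projections that define soft
    affinities between the N visible nodes and k latent nodes.
    """
    Psi = softmax(X @ W_enc)         # (N, k) visible-to-latent affinities
    Z = Psi.T @ X                    # (k, d) latent features: O(Nkd)
    Phi = softmax(X @ W_dec)         # (N, k) latent-to-visible affinities
    return Phi @ Z                   # (N, d) context, never forming an N x N matrix

rng = np.random.default_rng(0)
N, d, k = 1024, 64, 8                # k latent nodes, k << N
X = rng.normal(size=(N, d))
W_enc = rng.normal(size=(d, k))
W_dec = rng.normal(size=(d, k))
ctx = latent_nonlocal(X, W_enc, W_dec)
print(ctx.shape)                     # (1024, 64)
```

The effective affinity here factors as Phi @ Psi.T, a rank-k matrix, which is one concrete way the latent space yields a low-rank affinity representation with linear cost in N.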