A Comparative Analysis and Study of Multiview CNN Models for Joint Object Categorization and Pose Estimation


Mohamed Elhoseiny, Tarek El-Gaaly, Amr Bakry, Ahmed Elgammal.
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:888-897, 2016.


In object recognition there is a dichotomy between categorizing objects and estimating object pose: the former requires a view-invariant representation, while the latter requires a representation that captures pose information across different categories of objects. With the rise of deep architectures, the primary focus has been on object category recognition, where deep learning methods have achieved wide success. In contrast, object pose estimation with these approaches has received relatively little attention. In this work, we study how Convolutional Neural Network (CNN) architectures can be adapted to the task of simultaneous object recognition and pose estimation. We investigate and analyze the layers of various CNN models and extensively compare them, with the goal of discovering how the distributed representations within CNN layers encode object pose information and how this conflicts with object category representations. We experiment extensively on two recent large and challenging multi-view datasets and achieve results that surpass the state of the art.
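To make the "simultaneous recognition and pose estimation" setup concrete, the sketch below shows one common way such a network can be structured: a shared feature extractor feeding two task-specific heads, one producing category probabilities and one regressing the pose angle via (cos θ, sin θ). This is a minimal NumPy illustration of the general multi-task pattern, not the paper's actual architecture; all dimensions and weight choices here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): flattened input features,
# a shared representation, and 5 object categories.
D_IN, D_SHARED, N_CATEGORIES = 512, 64, 5

# Shared "backbone" weights: both tasks read the same representation,
# mirroring the idea of one network serving both objectives.
W_shared = rng.normal(scale=0.05, size=(D_IN, D_SHARED))
W_cat = rng.normal(scale=0.05, size=(D_SHARED, N_CATEGORIES))  # category head
W_pose = rng.normal(scale=0.05, size=(D_SHARED, 2))            # pose head: (cos, sin)

def forward(x):
    """Return (category probabilities, pose angle in radians) for a batch."""
    h = np.maximum(x @ W_shared, 0.0)             # shared ReLU features
    logits = h @ W_cat
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)     # softmax over categories
    cs = h @ W_pose                               # predict (cos θ, sin θ)
    angle = np.arctan2(cs[:, 1], cs[:, 0])        # recover the pose angle
    return probs, angle

batch = rng.normal(size=(4, D_IN))
probs, angles = forward(batch)
```

The tension the abstract describes shows up directly in this design: the shared features must simultaneously discard viewpoint variation for the category head and preserve it for the pose head.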
