Semantic-Guided Shared Feature Alignment for Occluded Person Re-IDentification
Proceedings of The 12th Asian Conference on Machine Learning, PMLR 129:17-32, 2020.
Occluded person Re-ID is a challenging and still unresolved task. Instead of extracting features over the entire image, which easily causes mismatching, we propose the Semantic-Guided Shared Feature Alignment (SGSFA) method to extract features focusing on the non-occluded parts. SGSFA parses human body regions through a Semantic Guided (SG) branch while simultaneously aligning regions through a Spatial Feature Alignment (SFA) branch, and obtains enriched representations over these regions for Re-ID. A dynamic classification loss on spatial features and their dynamic sequential combinations during training helps promote feature diversity. During the matching stage, we use only the visible features shared by probe and gallery, with no extra cues. Experimental results show that SGSFA achieves rank-1 accuracy of 62.3% and 50.5% on Occluded-DukeMTMC and P-DukeMTMC-reID respectively, surpassing the state of the art by a large margin.
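The abstract's matching stage compares probe and gallery using only the body parts visible in both. A minimal sketch of such shared-visibility matching is below; the function name, array shapes, and the cosine distance are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def shared_visible_distance(probe_feats, probe_vis, gallery_feats, gallery_vis):
    """Distance computed only over parts visible in BOTH images (assumed sketch).

    probe_feats, gallery_feats: (P, D) arrays of per-part features.
    probe_vis, gallery_vis:     (P,) boolean visibility masks.
    """
    shared = probe_vis & gallery_vis
    if not shared.any():
        # No commonly visible part: treat as maximally dissimilar.
        return 1.0
    p = probe_feats[shared].ravel()
    g = gallery_feats[shared].ravel()
    # Cosine distance over the concatenated shared-part features.
    cos = float(p @ g / (np.linalg.norm(p) * np.linalg.norm(g) + 1e-12))
    return 1.0 - cos
```

For example, if the probe's legs are occluded, the leg features are simply excluded from the comparison rather than contributing noisy, mismatched activations.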