Adaptive Attention-Aware Network for Unsupervised Person Re-identification

2020 
Abstract Person re-identification (Re-ID) has attracted increasing attention in computer vision and has achieved high accuracy on several publicly available datasets in a supervised manner. However, performance drops significantly when datasets are unlabeled, which limits the scalability of Re-ID algorithms in practical applications. Although some unsupervised methods have been proposed to address this scalability problem, it is difficult to learn discriminative feature representations due to the lack of pairwise labels across camera views. To overcome this problem, we propose an end-to-end network, the Adaptive Attention-Aware Network, for unsupervised person re-identification. Specifically, we propose a novel adaptive attention-aware module that can be easily embedded into a Re-ID architecture. The module focuses on learning expressive relationships among the channels of feature maps and alleviates key problems of Re-ID such as occlusion and local deformation. In addition, we extract camera-invariant features by adopting camera-style transfer feature learning, since matching pairs in Re-ID suffer from appearance changes under different camera views. Furthermore, unsupervised hard negative mining is introduced to handle large intra-person appearance variance and to discriminate high inter-person appearance similarity in an unlabeled target dataset with the help of an auxiliary labeled dataset. Comprehensive experiments on three publicly available Re-ID datasets demonstrate that our method achieves state-of-the-art unsupervised Re-ID results and is competitive with supervised learning.
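
The abstract describes an attention module that learns relationships among the channels of feature maps and can be embedded into a Re-ID backbone. The sketch below is not the authors' implementation; it is a minimal channel-attention block of that general kind, assuming a squeeze-and-excitation-style design in PyTorch. The class name ChannelAttention and the reduction parameter are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Illustrative channel re-weighting block (not the paper's exact module)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global spatial context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # re-weight the channels of the feature map


# Usage example: embed the block after a backbone stage producing 2048-channel maps.
feat = torch.randn(4, 2048, 16, 8)                   # (batch, channels, H, W)
attn = ChannelAttention(2048)
out = attn(feat)                                     # same shape, channel-reweighted
```

Such a block keeps the feature-map shape unchanged, which is what allows it to be inserted into an existing Re-ID architecture without altering the rest of the pipeline.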