Visible-Infrared Person Re-Identification via Partially Interactive Collaboration

2022 
The visible-infrared person re-identification (VI-ReID) task aims to retrieve the same person across visible and infrared images. VI-ReID is challenging because images captured by different spectra exhibit a large cross-modality discrepancy. Many methods adopt a two-stream network and design additional constraints to extract features shared by the two modalities. However, the interaction between the feature extraction processes of the two modalities is rarely considered. In this paper, a partially interactive collaboration (PIC) method is proposed that exploits the complementary information of the two modalities to reduce the modality gap for VI-ReID. Specifically, the proposed method is realized in a partially interactive-shared architecture: collaborative shallow layers followed by shared deep layers. The collaborative shallow layers model the interaction between the modality-specific features of the two modalities, encouraging their feature extraction processes to constrain each other and thereby enhance the feature representations. The shared deep layers then embed the modality-specific features into a common space to endow them with the same identity discriminability. To ensure that the interactive collaborative learning is effective, a conventional loss and a collaborative loss are used jointly to train the whole network. Extensive experiments on two publicly available VI-ReID datasets verify the superiority of the proposed PIC method; it achieves rank-1 accuracies of 83.6% and 57.5% on the RegDB and SYSU-MM01 datasets, respectively.
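
The sketch below illustrates the general shape of a partially interactive-shared two-stream network as described in the abstract: modality-specific shallow layers for the visible and infrared streams, shared deep layers, and a joint objective combining an identity loss with a collaborative term. The abstract does not give the backbone, the layer split, the shallow-layer interaction mechanism, or the exact collaborative loss, so all of those choices here (a ResNet-50 split after layer1, and a simple embedding-alignment term standing in for the collaborative loss) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a partially interactive-shared two-stream network for VI-ReID.
# Assumptions (not specified in the abstract): ResNet-50 backbone, split after
# layer1, and an MSE embedding-alignment term as a stand-in collaborative loss.
import torch
import torch.nn as nn
from torchvision import models


class ShallowBranch(nn.Module):
    """Modality-specific shallow layers (ResNet-50 stem + layer1)."""
    def __init__(self):
        super().__init__()
        r = models.resnet50(weights=None)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        self.layer1 = r.layer1

    def forward(self, x):
        return self.layer1(self.stem(x))


class PartiallyInteractiveShared(nn.Module):
    """Two modality-specific shallow branches followed by shared deep layers."""
    def __init__(self, num_ids):
        super().__init__()
        self.visible_branch = ShallowBranch()   # RGB stream
        self.infrared_branch = ShallowBranch()  # IR stream
        r = models.resnet50(weights=None)
        self.shared = nn.Sequential(r.layer2, r.layer3, r.layer4)  # shared deep layers
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(2048, num_ids)

    def forward(self, x_vis, x_ir):
        f_vis = self.visible_branch(x_vis)            # modality-specific features
        f_ir = self.infrared_branch(x_ir)
        z_vis = self.pool(self.shared(f_vis)).flatten(1)  # embeddings in common space
        z_ir = self.pool(self.shared(f_ir)).flatten(1)
        return self.classifier(z_vis), self.classifier(z_ir), z_vis, z_ir


def joint_loss(logits_v, logits_i, z_v, z_i, labels, lam=0.5):
    """Identity (cross-entropy) loss plus an illustrative collaborative term that
    pulls the two modality embeddings of the same person together."""
    ce = (nn.functional.cross_entropy(logits_v, labels)
          + nn.functional.cross_entropy(logits_i, labels))
    collab = nn.functional.mse_loss(z_v, z_i)
    return ce + lam * collab


if __name__ == "__main__":
    model = PartiallyInteractiveShared(num_ids=100)     # number of training identities is dataset-dependent
    x_vis = torch.randn(4, 3, 256, 128)                 # RGB crops
    x_ir = torch.randn(4, 3, 256, 128)                  # IR crops (single channel replicated to 3)
    labels = torch.randint(0, 100, (4,))
    logits_v, logits_i, z_v, z_i = model(x_vis, x_ir)
    print(joint_loss(logits_v, logits_i, z_v, z_i, labels).item())
```

Note that this sketch keeps the two shallow branches independent and approximates the collaboration purely through the loss; the paper's collaborative shallow layers additionally let the two feature extraction processes constrain each other during extraction, a mechanism the abstract does not detail.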