Visible-Infrared Person Re-Identification via Colorization-Based Siamese Generative Adversarial Network

2020 
With the explosive growth of surveillance data captured both day and night, visible-infrared person re-identification (VI-ReID) has emerged as a challenging task due to the apparent cross-modality discrepancy between visible and infrared images. Existing VI-ReID work mainly focuses on learning a robust feature to represent a person in both modalities, although such features cannot effectively eliminate the modality gap. Recent research has proposed various generative adversarial network (GAN) models that transfer the visible modality to another unified modality, aiming to bridge the cross-modality gap. However, these models neglect the information loss caused by transferring visible images to another domain, information that is significant for identification. To address these problems, we observe that key information in an infrared image, such as textures and semantics, can guide the colorization of the image itself, and that the colorized infrared image retains the rich information of the infrared image while reducing the discrepancy with visible images. We therefore propose a colorization-based Siamese generative adversarial network (CoSiGAN) for VI-ReID that bridges the cross-modality gap by retaining the identity of the colorized infrared image. Furthermore, we propose a feature-level fusion model to compensate for the information lost during the colorization transfer. Experiments conducted on two cross-modality person re-identification datasets demonstrate the superiority of the proposed method over state-of-the-art approaches.
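
Since the abstract only describes the architecture at a high level, the following is a minimal PyTorch sketch of the core idea: a generator that colorizes an infrared image, a discriminator that pushes the result toward the visible domain, and a weight-shared Siamese encoder whose loss keeps the colorized image's identity consistent with a visible image of the same person. All layer sizes, the L2 identity loss, and the loss weight are illustrative assumptions, not the authors' actual CoSiGAN implementation.

```python
import torch
import torch.nn as nn

class Colorizer(nn.Module):
    # Generator G: 1-channel infrared image -> 3-channel colorized image.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, ir):
        return self.net(ir)

class Discriminator(nn.Module):
    # D: distinguishes real visible images from colorized infrared images.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class SiameseEncoder(nn.Module):
    # Weight-shared identity encoder applied to both images of a Siamese pair.
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

# One generator update with toy tensors (shapes and weights are assumptions).
G, D, E = Colorizer(), Discriminator(), SiameseEncoder()
bce = nn.BCEWithLogitsLoss()
ir = torch.randn(4, 1, 128, 64)   # infrared images
vis = torch.randn(4, 3, 128, 64)  # visible images of the same identities
fake = G(ir)                      # colorized infrared images

logits = D(fake)
adv_loss = bce(logits, torch.ones_like(logits))      # fool the discriminator
# Identity retention: embeddings of the colorized image and the visible image
# of the same person should stay close (plain L2 used here as a placeholder).
id_loss = (E(fake) - E(vis)).pow(2).sum(dim=1).mean()
loss_G = adv_loss + 1.0 * id_loss  # the 1.0 weight is an assumed hyperparameter
loss_G.backward()
```

The feature-level fusion model mentioned in the abstract would sit downstream of this stage, combining features extracted from the original infrared image and its colorized counterpart before matching; that stage is omitted from the sketch.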