Camera Style Guided Feature Generation for Person Re-identification

2020 
Camera variance has long been a troublesome issue in person re-identification (re-ID). Recently, interest has grown in alleviating camera variance through data augmentation with generative models. However, these methods, mostly built on image-level generative adversarial networks (GANs), require substantial computational power to train the generative model. In this paper, we propose to address person re-ID with a feature-level camera-style guided GAN, which serves as an intra-class augmentation method that enhances model robustness against camera variance. Specifically, the proposed method performs camera-style transfer on input features while preserving the corresponding identity information. Moreover, the training process can be injected directly into the re-ID task in an end-to-end manner, so our method can be deployed with much lower time and space costs. Experiments demonstrate the validity of the generative model and its benefits to re-ID performance on the Market-1501 and DukeMTMC-reID datasets.
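To make the idea of feature-level camera-style transfer concrete, the following is a minimal PyTorch sketch of a camera-conditioned feature generator trained jointly with an identity classifier. The module names, feature dimensions, and loss weighting are illustrative assumptions and are not taken from the paper's exact architecture.

```python
import torch
import torch.nn as nn

FEAT_DIM = 2048   # backbone feature dimension (e.g., a ResNet-50 global feature); assumed
NUM_CAMS = 6      # Market-1501 has 6 cameras
NUM_IDS = 751     # Market-1501 training identities

class CamStyleGenerator(nn.Module):
    """Maps a feature plus a target-camera embedding to a camera-style-transferred feature."""
    def __init__(self, feat_dim=FEAT_DIM, num_cams=NUM_CAMS, cam_dim=64):
        super().__init__()
        self.cam_embed = nn.Embedding(num_cams, cam_dim)
        self.net = nn.Sequential(
            nn.Linear(feat_dim + cam_dim, feat_dim),
            nn.BatchNorm1d(feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, feat, target_cam):
        cam = self.cam_embed(target_cam)                 # (B, cam_dim)
        return self.net(torch.cat([feat, cam], dim=1))   # (B, feat_dim)

class CamDiscriminator(nn.Module):
    """Predicts which camera style a feature appears to come from."""
    def __init__(self, feat_dim=FEAT_DIM, num_cams=NUM_CAMS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, num_cams),
        )

    def forward(self, feat):
        return self.net(feat)

# Joint use with the re-ID head: generated features act as extra intra-class samples,
# so the identity classifier sees the same person under other camera styles.
id_classifier = nn.Linear(FEAT_DIM, NUM_IDS)
G, D = CamStyleGenerator(), CamDiscriminator()
ce = nn.CrossEntropyLoss()

feats = torch.randn(32, FEAT_DIM)                 # backbone features for a batch
pids = torch.randint(0, NUM_IDS, (32,))           # identity labels
target_cams = torch.randint(0, NUM_CAMS, (32,))   # randomly sampled target cameras

fake = G(feats, target_cams)
loss_adv = ce(D(fake), target_cams)               # push generated features toward the target camera style
loss_id = ce(id_classifier(fake), pids)           # keep identity information after transfer
loss = loss_adv + loss_id                         # the usual re-ID loss on real features would be added as well
loss.backward()
```

Because the generator operates on feature vectors rather than images, its parameters and memory footprint stay small, which is what allows the end-to-end joint training described in the abstract.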