3D-GAT: 3D-Guided Adversarial Transform Network for Person Re-identification in Unseen Domains

2021 
Abstract
Person Re-identification (ReID) has witnessed remarkable improvements in the past couple of years. However, its application in real-world scenarios is limited by the disparity among different cameras and datasets. In general, it remains challenging to generalize ReID algorithms from one domain to another, especially when the target domain is unknown. To address this issue, we develop a 3D-guided adversarial transform (3D-GAT) network that explores the transferability of source training data to facilitate learning domain-independent knowledge. Guided by a 3D body model and human poses, 3D-GAT uses image-to-image translation to synthesize person images under different conditions while preserving identity-relevant features as much as possible. With these augmented training data, ReID approaches can more easily perceive how a person may appear under varying viewpoints and poses, most of which are not seen in the training data, and thus achieve higher ReID accuracy, especially in an unknown domain. Extensive experiments conducted on Market-1501, DukeMTMC-reID and CUHK03 demonstrate the effectiveness of our proposed approach, which is competitive with the baseline models on the original dataset and sets a new state of the art in direct transfer to other datasets.
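The abstract does not spell out the network architecture, but the core mechanism it describes, a pose-conditioned image-to-image translation generator trained adversarially, whose synthesized crops augment the ReID training set, can be sketched as follows. This is a minimal illustrative sketch in PyTorch only: the layer shapes, the 18-channel pose heatmaps, and the L1 appearance-preservation term are assumptions for the example, not the paper's actual 3D-GAT design.

```python
# Hypothetical sketch of pose-conditioned translation for ReID augmentation.
# All module names, channel sizes, and losses are illustrative assumptions.
import torch
import torch.nn as nn


class PoseConditionedGenerator(nn.Module):
    """Translates (source image, target pose map) -> synthesized image."""
    def __init__(self, img_ch=3, pose_ch=18, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + pose_ch, base, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, img_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, img, pose):
        # Condition the translation on the target pose by channel concatenation.
        return self.net(torch.cat([img, pose], dim=1))


class PatchDiscriminator(nn.Module):
    """Judges whether a person crop looks real (patch-level logits)."""
    def __init__(self, img_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch, base, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, 1, 1),
        )

    def forward(self, img):
        return self.net(img)


def train_step(G, D, opt_g, opt_d, img, pose, lam_id=10.0):
    """One adversarial step; the L1 term is a stand-in for an
    identity-preserving loss (the paper's exact loss is not given here)."""
    bce = nn.BCEWithLogitsLoss()
    fake = G(img, pose)

    # Discriminator update: separate real crops from synthesized ones.
    opt_d.zero_grad()
    d_real, d_fake = D(img), D(fake.detach())
    loss_d = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    loss_d.backward()
    opt_d.step()

    # Generator update: fool D while keeping identity-bearing appearance.
    opt_g.zero_grad()
    d_fake = D(fake)
    loss_g = (bce(d_fake, torch.ones_like(d_fake))
              + lam_id * nn.functional.l1_loss(fake, img))
    loss_g.backward()
    opt_g.step()
    return fake  # synthesized crops would augment the ReID training set


if __name__ == "__main__":
    G, D = PoseConditionedGenerator(), PatchDiscriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    img = torch.randn(2, 3, 128, 64)    # dummy person crops (Market-1501-like size)
    pose = torch.randn(2, 18, 128, 64)  # dummy 18-channel keypoint heatmaps
    train_step(G, D, opt_g, opt_d, img, pose)
```

In this reading, the generator plays the role of the "adversarial transform": each real training image yields extra views of the same identity in unseen poses, which a downstream ReID model then trains on alongside the originals.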