FD-GAN: Pose-guided Feature Distilling GAN For Robust Person Re-identification

Authors:
Yixiao Ge The Chinese University of Hong Kong
Zhuowan Li Johns Hopkins University
Haiyu Zhao The Chinese University of Hong Kong
Guojun Yin University of Science and Technology of China
Shuai Yi The Chinese University of Hong Kong
Xiaogang Wang The Chinese University of Hong Kong
Hongsheng Li The Chinese University of Hong Kong

Abstract:

Person re-identification (reID) is an important task that requires retrieving a person's images from an image dataset, given one image of the person of interest. Pose variation of person images is one of the key challenges for learning robust person features. Existing works targeting this problem either perform human alignment or learn human-region-based representations, and they generally require extra pose information and computational cost at inference. To solve this issue, a Feature Distilling Generative Adversarial Network (FD-GAN) is proposed for learning identity-related and pose-unrelated representations. It is a novel framework based on a Siamese structure with multiple novel discriminators on human poses and identities. In addition to the discriminators, a novel same-pose loss is also integrated, which requires the appearance of the same person's generated images to be similar. After learning pose-unrelated person features with pose guidance, no auxiliary pose information or additional computational cost is required during testing. Our proposed FD-GAN achieves state-of-the-art performance on three person reID datasets, which demonstrates the effectiveness and robust feature-distilling capability of the proposed FD-GAN.
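To make the described training setup more concrete, the following PyTorch-style sketch illustrates the Siamese structure and the same-pose loss at the level of detail given in the abstract: two images of the same person are encoded by a shared encoder, each branch generates the person under a common target pose, and the two generated images are encouraged to look alike. The module names (encoder, generator, id_disc, pose_disc) and the choice of L1 distance for the same-pose loss are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the FD-GAN training structure described in the abstract.
# All component names and the L1 same-pose loss are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FDGANSketch(nn.Module):
    def __init__(self, encoder, generator, id_disc, pose_disc):
        super().__init__()
        self.encoder = encoder      # reID feature extractor; used alone at test time
        self.generator = generator  # pose-conditioned image generator
        self.id_disc = id_disc      # identity discriminator (hypothetical interface)
        self.pose_disc = pose_disc  # pose discriminator (hypothetical interface)

    def forward(self, img_a, img_b, target_pose):
        # Siamese branches: two images of the same person share one encoder,
        # so the learned features are identity-related.
        feat_a = self.encoder(img_a)
        feat_b = self.encoder(img_b)
        # Both branches generate the same person under the same target pose,
        # distilling pose information out of the features.
        fake_a = self.generator(feat_a, target_pose)
        fake_b = self.generator(feat_b, target_pose)
        return feat_a, feat_b, fake_a, fake_b


def same_pose_loss(fake_a, fake_b):
    # Same-pose loss: generated images of the same person under the same
    # target pose should have similar appearance (L1 distance assumed here).
    return F.l1_loss(fake_a, fake_b)
```

At test time only the encoder branch would be kept, which is consistent with the claim that no auxiliary pose information or extra computation is needed during inference.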
