Body Part-Level Domain Alignment for Domain-Adaptive Person Re-Identification With Transformer Framework

2022 
Although existing domain-adaptive person re-identification (re-ID) methods have achieved competitive performance, most of them rely heavily on the reliability of pseudo-label prediction, which seriously limits their applicability because noisy labels cannot be avoided. This paper designs a Transformer framework based on body part-level domain alignment to address these issues in domain-adaptive person re-ID. Different parts of the human body (such as the head, torso, and legs) have different structures and shapes and therefore usually exhibit different characteristics. The proposed method makes full use of the dissimilarity between different human body parts. Specifically, the local features from the same body part are aggregated by the Transformer to obtain a corresponding class token, which serves as the global representation of that body part. Additionally, a Transformer layer-embedded adversarial learning strategy is designed. Through an integrated discriminator, this strategy simultaneously achieves domain alignment and classification of the class token for each human body part in both the source and target domains, thereby realizing domain alignment at the human body part level. Compared with existing domain-level and identity-level alignment methods, the proposed method has a stronger fine-grained domain alignment capability, so the information loss or distortion that may occur during feature alignment can be effectively alleviated. The proposed method does not need to predict pseudo labels for any target sample, so the negative impact of unreliable pseudo labels on re-ID performance is avoided. Compared with state-of-the-art methods, the proposed method achieves better performance on datasets that reflect real-world scene settings.
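To make the described architecture more concrete, the following is a minimal PyTorch sketch, not the authors' implementation, of the two ideas in the abstract: a learnable class token per body part aggregated by a Transformer encoder, and adversarial domain alignment of each part token via a gradient-reversal layer feeding an integrated identity/domain discriminator. All names (`PartTokenTransformer`, `GradReverse`, `num_parts`, `num_ids`) and the use of gradient reversal are illustrative assumptions.

```python
# Hedged sketch of body part-level class tokens with adversarial alignment.
# Assumptions: gradient reversal for the adversarial step; 3 body parts; the
# local part features are assumed to be produced by an upstream backbone.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Gradient reversal layer, a common choice for adversarial domain alignment."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        # Reverse (and scale) the gradient flowing back into the feature extractor.
        return -ctx.lam * grad_out, None


class PartTokenTransformer(nn.Module):
    """Aggregates local features of each body part into a part-level class token."""
    def __init__(self, dim=256, num_parts=3, num_ids=751, num_layers=2):
        super().__init__()
        # One learnable class token per body part (e.g. head, torso, legs).
        self.part_tokens = nn.Parameter(torch.randn(num_parts, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # "Integrated discriminator" stand-in: per-part identity and domain heads.
        self.id_head = nn.Linear(dim, num_ids)
        self.domain_head = nn.Linear(dim, 2)

    def forward(self, part_feats, lam=1.0):
        # part_feats: list of tensors, one per body part, each (B, N_p, dim).
        id_logits, dom_logits = [], []
        for p, feats in enumerate(part_feats):
            b = feats.size(0)
            cls = self.part_tokens[p].expand(b, 1, -1)          # (B, 1, dim)
            out = self.encoder(torch.cat([cls, feats], dim=1))  # prepend class token
            part_cls = out[:, 0]                                # aggregated part token
            id_logits.append(self.id_head(part_cls))            # identity classification
            dom_logits.append(self.domain_head(GradReverse.apply(part_cls, lam)))
        return id_logits, dom_logits


if __name__ == "__main__":
    model = PartTokenTransformer()
    # Three body parts, each with 8 local feature vectors per image, batch of 4.
    feats = [torch.randn(4, 8, 256) for _ in range(3)]
    id_logits, dom_logits = model(feats)
    print(id_logits[0].shape, dom_logits[0].shape)  # (4, 751) and (4, 2)
```

In this sketch, identity loss would be computed only on source-domain samples, while the domain loss is computed on both domains, which mirrors the paper's claim that no pseudo labels are needed for target samples; the exact discriminator design in the paper may differ.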