Dual Mutual Learning for Cross-Modality Person Re-Identification

2022 
Cross-modality person re-identification (Re-ID) is more challenging than conventional visible-light Re-ID because of the large cross-modality gap between heterogeneous images. To alleviate this problem, existing methods often adopt a dual-path learning framework equipped with a metric loss to learn discriminative features. Despite their effectiveness, these methods inevitably sacrifice intra-modality discrimination when optimizing for cross-modality discrimination, and this degeneration substantially limits how far the feature representations can be improved. To mitigate this degeneration, we propose a Dual Mutual Learning (DML) method for cross-modality Re-ID that conducts mutual learning between the cross-modality branch and each of the two single-modality branches. Specifically, we design a triple-branch deep model consisting of an RGB branch, an IR branch, and a cross-modality branch. The cross-modality branch learns a modality-invariant feature subspace for appearance similarity measurement, while the RGB and IR branches provide attention supervision to the cross-modality branch for attention feature alignment, thereby strengthening intra-modality discrimination. Experimental results on two standard benchmarks demonstrate that DML is superior to state-of-the-art methods.
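To make the triple-branch idea concrete, the following is a minimal PyTorch sketch of one plausible reading of the abstract: two modality-specific backbones, a weight-shared cross-modality backbone, and an attention-alignment term in which each single-modality branch supervises the cross-modality branch on its own modality. The backbone choice (ResNet-18), the squared-activation spatial attention, the MSE alignment loss, and the identity count are all illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of a triple-branch Dual Mutual Learning model.
# All architectural details below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


def spatial_attention(feat):
    # Collapse channels into a normalized spatial attention map
    # (an assumption; the paper may derive attention differently).
    attn = feat.pow(2).mean(dim=1, keepdim=True)   # (B, 1, H, W)
    return F.normalize(attn.flatten(1), dim=1)     # (B, H*W)


class TripleBranchDML(nn.Module):
    def __init__(self, num_ids=395):
        super().__init__()
        # Three backbones: RGB-specific, IR-specific, and a shared
        # cross-modality branch that sees images from both modalities.
        def backbone():
            return nn.Sequential(*list(resnet18(weights=None).children())[:-2])
        self.rgb = backbone()
        self.ir = backbone()
        self.cross = backbone()
        self.classifier = nn.Linear(512, num_ids)

    def forward(self, rgb_img, ir_img):
        f_rgb, f_ir = self.rgb(rgb_img), self.ir(ir_img)
        # The cross-modality branch processes both modalities with shared
        # weights, learning a modality-invariant feature subspace.
        f_x_rgb, f_x_ir = self.cross(rgb_img), self.cross(ir_img)
        # Identity logits from globally pooled cross-modality features.
        emb = torch.cat([f_x_rgb, f_x_ir]).mean(dim=(2, 3))
        logits = self.classifier(emb)
        # Attention alignment: each single-modality branch provides
        # (detached) attention supervision to the cross-modality branch.
        align = (
            F.mse_loss(spatial_attention(f_x_rgb), spatial_attention(f_rgb).detach())
            + F.mse_loss(spatial_attention(f_x_ir), spatial_attention(f_ir).detach())
        )
        return logits, align


model = TripleBranchDML()
rgb = torch.randn(4, 3, 256, 128)  # visible images
ir = torch.randn(4, 3, 256, 128)   # IR images replicated to 3 channels
logits, align_loss = model(rgb, ir)
print(logits.shape, align_loss.item())
```

In this sketch the alignment targets are detached so that gradients flow only into the cross-modality branch, matching the one-way "attention supervision" direction described in the abstract; in a full mutual-learning setup the single-modality branches would also receive a learning signal, e.g. from their own identity losses.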