Alleviating Modality Bias Training for Infrared-Visible Person Re-Identification

2021 
The task of infrared-visible person re-identification (IV-reID) is to recognize people across two modalities (i.e., RGB and IR). Existing cutting-edge approaches normally take a pair of images with the same ID (i.e., an ID-tied cross-modality image pair) and feed it into an ImageNet-pretrained ResNet50. The ResNet50 backbone can learn features shared across modalities and thus tolerate the modality discrepancy between RGB and IR. This work unveils a Modality Bias Training (MBT) problem that has received little attention in IV-reID and demonstrates that MBT significantly compromises IV-reID performance. Due to MBT, IR information can be overwhelmed by RGB information during training, because the ResNet50 model is pretrained on a large number of RGB images from ImageNet. The trained models are therefore biased toward RGB information, and the cross-modality generalization ability of the model is compromised. To tackle this issue, we present a Dual-level Learning Strategy (DLS) that 1) forces the network to focus on ID-exclusive (rather than ID-tied) labels of cross-modality image pairs to mitigate MBT, and 2) introduces third-modality data that contain both RGB and IR information to further prevent IR information from being overwhelmed during training. Our third-modality images are generated by a generative adversarial network, and a dynamic ID-exclusive Smooth (dIDeS) label is proposed for the generated third-modality data. In experiments, the effectiveness of the proposed DLS is verified with the classic ID-discriminative Embedding (IDE) model rather than an elaborate network architecture. Comprehensive experiments demonstrate the success of DLS in tackling the MBT issue exposed in IV-reID.
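Below is a minimal sketch of one plausible reading of the ID-exclusive labeling and smoothed third-modality targets described above, assuming that "ID-exclusive" means each identity receives a separate class label per modality and that a generated mixed-modality image is assigned a soft target split between its RGB and IR classes. The function names, the label-space layout, and the smoothing weight `alpha` are illustrative assumptions; the abstract does not specify the actual dIDeS formulation.

```python
# A minimal sketch, assuming (hypothetically) that "ID-exclusive" labels assign
# each identity two distinct class indices, one per modality, so IR samples are
# never absorbed into the RGB class of the same person. The smoothing weight
# `alpha` for generated third-modality images is an illustrative stand-in; the
# paper's actual dynamic dIDeS scheme may differ.
import torch
import torch.nn.functional as F

def id_exclusive_label(person_id: int, modality: str, num_ids: int) -> int:
    """Map (person_id, modality) to a single class index in a 2 * num_ids space."""
    offset = 0 if modality == "rgb" else num_ids
    return person_id + offset

def smoothed_third_modality_target(person_id: int, num_ids: int, alpha: float) -> torch.Tensor:
    """Soft target for a generated image mixing RGB and IR information:
    probability mass is split between the RGB and IR classes of the same identity."""
    target = torch.zeros(2 * num_ids)
    target[id_exclusive_label(person_id, "rgb", num_ids)] = alpha
    target[id_exclusive_label(person_id, "ir", num_ids)] = 1.0 - alpha
    return target

# Example: identity 3 out of 100 identities, equal weight on both modality classes.
target = smoothed_third_modality_target(person_id=3, num_ids=100, alpha=0.5)
logits = torch.randn(1, 200)  # classifier output over the 2 * num_ids label space
loss = F.cross_entropy(logits, target.unsqueeze(0))  # soft-target cross entropy
```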