Learning object-centric complementary features for zero-shot learning

2020 
Abstract Zero-shot learning (ZSL) aims to recognize new objects that have never been seen before by associating categories with their semantic knowledge. Existing works mainly focus on learning a better visual-semantic mapping to align the visual and semantic spaces, while the effectiveness of learning discriminative visual features is neglected. In this paper, we propose an object-centric complementary features (OCF) learning model that takes full advantage of the visual information of objects under the guidance of semantic knowledge. The model automatically discovers the object region and obtains fine-scale samples without any human annotation. An attention mechanism is then used to capture long-range visual features corresponding to semantic knowledge such as ‘four legs’, as well as subtle visual differences between similar categories. Finally, we train the model end-to-end under the guidance of semantic knowledge. Our method is evaluated on three widely used ZSL datasets, CUB, AwA2, and FLO; the experimental results demonstrate the efficacy of the object-centric complementary features, and our proposed method outperforms state-of-the-art methods.
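The long-range attention idea mentioned in the abstract can be illustrated with a minimal sketch: a non-local self-attention pass over the spatial positions of a feature map, so that each position aggregates information from distant positions (e.g. all four legs of an animal). This is only an assumed, generic formulation in NumPy; the function name `self_attention` and the array shapes are illustrative, and the OCF model's actual architecture is not specified in this excerpt.

```python
import numpy as np

def self_attention(features):
    """Non-local self-attention over a flattened feature map.

    features: (N, d) array, N spatial positions with d-dim descriptors.
    Returns attended features of the same shape, where each position is a
    softmax-weighted mixture of all positions (long-range aggregation).
    """
    d = features.shape[1]
    scores = features @ features.T / np.sqrt(d)   # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # softmax over positions
    return weights @ features                     # weighted aggregation

# Example: 4 spatial positions with 8-dimensional features
feats = np.random.default_rng(0).normal(size=(4, 8))
out = self_attention(feats)
```

Because every output position mixes all input positions, spatially separated but semantically related regions can reinforce each other, which is the intuition behind using attention to match part-level semantic attributes.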