Context-Aware Generation-Based Net For Multi-Label Visual Emotion Recognition

2020 
Visual Emotion Recognition has attracted increasing research attention in recent years. Existing approaches mainly rely on facial expressions or classify the whole image as simply positive or negative. In practice, people can recognize multiple emotions in a single image by drawing on both global and local information. In this paper, we propose a Context-Aware Generation-Based Net (CAGBN), a novel architecture that makes full use of the global and local information in an image by considering both the whole scene and the details of the target person. Inspired by psychological studies suggesting that, when viewing a person in context, we form judgments gradually rather than assigning all labels at once, CAGBN transforms the multi-label classification problem into a sequence generation task for better recognition. Extensive experimental results on an emotion recognition dataset demonstrate the superiority and rationality of CAGBN.
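To make the abstract's idea concrete, below is a minimal sketch of a context-aware, generation-based model in PyTorch: two image encoders (one for the whole scene, one for the target-person crop) are fused and drive a recurrent decoder that emits emotion labels one step at a time. The class name CAGBNSketch, the ResNet-18 backbones, the GRU decoder, the layer sizes, and the greedy decoding loop are all illustrative assumptions; the paper's actual architecture is not specified in this abstract.

```python
import torch
import torch.nn as nn
from torchvision import models


class CAGBNSketch(nn.Module):
    """Illustrative sketch only: encode the whole image (global context) and
    the target-person crop (local detail), fuse them, then generate emotion
    labels sequentially with a GRU, treating multi-label classification as
    sequence generation. All design choices here are assumptions."""

    def __init__(self, num_emotions, hidden_dim=512):
        super().__init__()
        # Two independent CNN encoders: one for the full scene, one for the person crop.
        self.global_encoder = models.resnet18(weights=None)
        self.global_encoder.fc = nn.Identity()
        self.local_encoder = models.resnet18(weights=None)
        self.local_encoder.fc = nn.Identity()
        # Fuse global + local features into the decoder's initial hidden state.
        self.fuse = nn.Linear(512 + 512, hidden_dim)
        # Label embeddings; the extra index (num_emotions) acts as a start/stop token.
        self.label_embed = nn.Embedding(num_emotions + 1, hidden_dim)
        self.decoder = nn.GRUCell(hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_emotions + 1)

    def forward(self, image, person_crop, max_steps=5):
        g = self.global_encoder(image)        # (B, 512) scene-level context
        l = self.local_encoder(person_crop)   # (B, 512) person-level detail
        h = torch.tanh(self.fuse(torch.cat([g, l], dim=1)))
        batch = image.size(0)
        # Begin decoding from the start token.
        prev = torch.full((batch,), self.label_embed.num_embeddings - 1,
                          dtype=torch.long, device=image.device)
        logits_per_step = []
        for _ in range(max_steps):
            h = self.decoder(self.label_embed(prev), h)
            step_logits = self.classifier(h)
            logits_per_step.append(step_logits)
            # Greedy choice of the current label feeds the next decoding step.
            prev = step_logits.argmax(dim=1)
        return torch.stack(logits_per_step, dim=1)  # (B, max_steps, num_emotions + 1)
```

In this sketch the decoder conditions each predicted label on the labels already generated, which is one plausible way to realize the "gradual judgment" intuition the abstract describes; the real CAGBN may fuse context or order labels differently.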