CONSK-GCN: Conversational Semantic- and Knowledge-Oriented Graph Convolutional Network for Multimodal Emotion Recognition

2021 
Emotion recognition in conversations (ERC) has received significant attention in recent years due to its widespread applications in diverse areas such as social media, health care, and artificial intelligence interactions. However, unlike nonconversational text, modeling effective context-aware dependence is particularly challenging for ERC. To address this problem, we propose a new Conversational Semantic- and Knowledge-oriented Graph Convolutional Network (ConSK-GCN) approach that leverages both semantic dependence and commonsense knowledge. First, we model the contextual inter-speaker interaction and intra-speaker dependence of the interlocutors via a conversational graph convolutional network built on multimodal representations. Second, we incorporate commonsense knowledge to guide ConSK-GCN in modeling semantic-sensitive and knowledge-sensitive contextual dependence. The results of extensive experiments show that the proposed method outperforms the current state of the art on the IEMOCAP dataset.
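To make the core idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of one graph-convolution step over utterance nodes, where edge weights fuse a semantic-similarity matrix and a commonsense-knowledge affinity matrix. The fusion weight `alpha`, the random toy features, and the function names are all assumptions for illustration only.

```python
import numpy as np

def fuse_edges(sem, know, alpha=0.5):
    """Hypothetical fusion of semantic similarity and knowledge affinity
    into a single weighted adjacency matrix (alpha is an assumed weight)."""
    return alpha * sem + (1.0 - alpha) * know

def gcn_layer(H, A, W):
    """One standard GCN step: ReLU(D^-1/2 (A+I) D^-1/2 H W),
    with self-loops added and symmetric degree normalization."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

# Toy conversation: 4 utterance nodes with 8-dim multimodal features.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))          # fused multimodal utterance features
sem = np.abs(rng.standard_normal((4, 4)))   # e.g. cosine similarities
know = np.abs(rng.standard_normal((4, 4)))  # e.g. knowledge-base affinities
sem = (sem + sem.T) / 2                  # symmetrize for an undirected graph
know = (know + know.T) / 2
A = fuse_edges(sem, know)
W = rng.standard_normal((8, 8))          # learnable weights in a real model
out = gcn_layer(H, A, W)                 # updated utterance representations
```

In a real ERC pipeline the semantic matrix would come from utterance encoders and the knowledge matrix from a commonsense resource; here both are random stand-ins to keep the sketch self-contained.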