Better Learning and Fusing Multi-Granularity Context Representations for Relevant Response Generation

2021 
In open-domain multi-turn dialogue, capturing related information from the conversational history is key to generating relevant responses. In this paper, we propose a method to extract and fuse related context information at different levels of granularity, using a word-level encoder, an utterance-level encoder, and a fusion decoder. The word-level encoder obtains a context-aware representation of the last utterance, called the post, by attending from the post to the context utterances at the word level. The utterance-level encoder models the correlation among context utterances with a self-matching attention mechanism. Finally, the multi-granularity context representations from the two encoders are fused via the fusion decoder to improve the relevance of generated responses. Experimental results on the Ubuntu Dialog Corpus and the Cornell Movie Dialog Corpus show that our method significantly outperforms all the strong baselines.
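The abstract describes two encoders operating at different granularities whose outputs are fused for decoding. The following is a minimal NumPy sketch of that data flow, not the paper's actual model: the hidden size, pooling, and concatenation-based fusion are illustrative assumptions, and `attend` stands in for whatever attention variant the paper uses.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    """Scaled dot-product attention: each query row attends over keys/values."""
    scores = query @ keys.T / np.sqrt(keys.shape[-1])
    return softmax(scores) @ values

rng = np.random.default_rng(0)
d = 8                                # hidden size (illustrative assumption)
post = rng.normal(size=(5, d))       # word states of the last utterance (post)
context = rng.normal(size=(12, d))   # word states of all context utterances
utts = rng.normal(size=(3, d))       # one summary vector per context utterance

# Word-level encoder: post words attend over context words,
# yielding a context-aware representation of the post.
word_ctx = attend(post, context, context)   # shape (5, d)

# Utterance-level encoder: self-matching attention, i.e. the context
# utterances attend over themselves to model inter-utterance correlation.
utt_ctx = attend(utts, utts, utts)          # shape (3, d)

# Fusion decoder (sketch): combine both granularities into a single
# decoder input; here simply by concatenating mean-pooled summaries,
# whereas the real model would fuse them at each decoding step.
fused = np.concatenate([word_ctx.mean(axis=0), utt_ctx.mean(axis=0)])
print(fused.shape)
```

In a trained model the fused representation would condition an autoregressive decoder; the pooling here only illustrates how the two granularities meet in one vector.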