Building Interactive Sentence-aware Representation based on Generative Language Model for Community Question Answering

2020 
Abstract Semantic matching between question and answer sentences involves recognizing whether a candidate answer is relevant to a particular input question. Given that such matching does not examine a question or an answer individually, context information outside the sentence should be considered as important as the within-sentence syntactic context. This motivates us to design a new question-answer matching model, built upon a cross-sentence, context-aware, bi-directional long short-term memory architecture. Interactive attention mechanisms are proposed that automatically select the salient positional sentence representations contributing most significantly to the relevance between a question and an answer. A new quantity called the context information jump is proposed to facilitate the formulation of the attention weights, and is computed via the joint states of adjacent words. An interaction-aware sentence representation is constructed by connecting a combination of multiple positional sentence representations to each hidden state. In the experiments, the proposed method is compared with existing models on four public community datasets, and the evaluations show that it is very competitive. In particular, it offers a 0.32%-1.8% improvement over the best-performing model on three of the four datasets, while on the remaining one its performance is within 0.2% of the best performer.
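The context-jump-based attention described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes the hidden states come from a (hypothetical) BiLSTM encoder, and it approximates the "context information jump" at each position as the magnitude of the change between adjacent hidden states, so that positions where the context shifts sharply receive larger attention weights.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def context_jump_attention(H):
    """Sketch of attention weighted by a context-jump score.

    H: (T, d) array of per-position hidden states (assumed to come
    from a BiLSTM; here just an arbitrary matrix for illustration).
    The jump at position t is approximated as ||h_t - h_{t-1}||,
    an assumption standing in for the paper's joint-state formula.
    Returns the attention-pooled sentence vector and the weights.
    """
    T = H.shape[0]
    jumps = np.empty(T)
    for t in range(T):
        prev = H[t - 1] if t > 0 else np.zeros_like(H[t])
        jumps[t] = np.linalg.norm(H[t] - prev)
    weights = softmax(jumps)           # (T,), sums to 1
    sentence = weights @ H             # (d,) positional combination
    return sentence, weights

# Toy usage with random "hidden states".
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))        # 5 positions, 8-dim states
sentence, weights = context_jump_attention(H)
```

The pooled vector `sentence` plays the role of one positional sentence representation; in the full model, several such representations would be combined and attached to each hidden state to form the interaction-aware representation.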