STCP: An Efficient Model Combining Subject Triples and Constituency Parsing for Recognizing Textual Entailment

2021 
Recognizing Textual Entailment (RTE) aims at automatically determining the logical relationship between a given premise and hypothesis, and strong recognition ability benefits other natural language understanding tasks. Based on Bidirectional Encoder Representations from Transformers (BERT), this paper proposes a model combining Subject Triples and Constituency Parsing (STCP) for RTE. Specifically, the model incorporates the central subject triples when fine-tuning BERT and weights the features of all hidden layers with attention to capture global semantic information. Sentences annotated with constituency parses are encoded by a Tree Long Short-Term Memory network (Tree-LSTM) to obtain local structural information. Finally, the global semantic information and the local structural information are combined through a matrix-splicing fusion module to enhance the model's ability to recognize semantic logical relationships. Experimental results show that the STCP model achieves better recognition performance than the benchmark models on the public SNLI and MNLI datasets, and the model's effectiveness is further verified through ablation experiments and visualization of attention weights.
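The fusion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tensor shapes, the softmax layer-attention, and the stubbed Tree-LSTM output are all assumptions; only the overall flow (attention-weighted combination of all BERT hidden layers, a local structural vector from a Tree-LSTM, and concatenation-based fusion) comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers, seq_len, d = 12, 8, 16  # toy sizes (BERT-base has 12 layers)

# Stand-in for the hidden states of every BERT layer: (layers, tokens, dim).
hidden = rng.normal(size=(num_layers, seq_len, d))

# Learnable per-layer attention scores (random here for illustration),
# normalized with a softmax over the layer axis.
scores = rng.normal(size=num_layers)
alpha = np.exp(scores) / np.exp(scores).sum()

# Global semantic vector: attention-weighted sum of per-layer sentence
# representations (mean-pooled over tokens as a simple proxy).
layer_repr = hidden.mean(axis=1)                        # (num_layers, d)
global_vec = (alpha[:, None] * layer_repr).sum(axis=0)  # (d,)

# Local structural vector: output of a Tree-LSTM over the constituency
# parse (stubbed with a random vector here).
local_vec = rng.normal(size=d)

# Matrix-splicing fusion: concatenate the two views before classification.
fused = np.concatenate([global_vec, local_vec])         # (2 * d,)
```

The concatenated `fused` vector would then feed a classifier over the three entailment labels (entailment, contradiction, neutral).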