Enhanced distance-aware self-attention and multi-level match for sentence semantic matching

2022 
Sentence semantic matching is a core research area in natural language processing and is widely used in many natural language tasks. In recent years, attention mechanisms have shown strong performance in deep neural networks for sentence semantic matching. However, most attention-based deep neural networks focus on sentence interaction and neglect modeling the core semantics of a sentence. In other words, they do not consider the importance of the relative distance between words when modeling sentence semantics, which leads to deviations in the modeled core semantics and to unstable sentence interaction. In practice, people tend to associate words that appear close together when they read and to assume a deeper connection between them. In addition, the interactive matching applied after sentence modeling is typically simple and may be inadequate. In this paper, we build a distance-aware self-attention and multi-level matching model (DSSTM) for sentence semantic matching tasks. By weighting tokens according to their relative distance, it captures the original semantics of sentences more faithfully and applies interactive matching at multiple levels after sentence modeling. Specifically, given two input sentences, we first encode them as contextual embeddings. The contextual embeddings are then processed by enhanced distance-aware self-attention to strengthen sentence semantic modeling from both global and local perspectives. At the same time, we apply a co-attention layer to extract cross-sentence interaction features while simplifying all remaining components. Finally, we fuse these features with a multi-level matching function to obtain an aggregation vector and learn diverse matching representations, which helps capture the diversity of sentence pairs. We conduct experiments on three sentence semantic matching tasks. Experimental results on these public datasets demonstrate that our model outperforms competitive baselines while using fewer parameters. Our source code is publicly available at https://github.com/xiaodeng-1/DSSTM.
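
The abstract does not spell out the exact form of the distance-aware self-attention, so the following is a minimal sketch of one plausible realization, assuming a learned bias on the attention logits indexed by the clipped relative distance between tokens (in the spirit of relative-position biases). The class and parameter names (DistanceAwareSelfAttention, max_distance) are illustrative and are not taken from the paper's released code.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistanceAwareSelfAttention(nn.Module):
    """Single-head self-attention with a learned bias per relative distance.

    Nearby tokens can receive a larger (learned) bias, reflecting the
    intuition that words close together are more strongly related.
    """

    def __init__(self, hidden_dim: int, max_distance: int = 16):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        self.value = nn.Linear(hidden_dim, hidden_dim)
        self.max_distance = max_distance
        # one learnable scalar bias per clipped relative distance in [-max_distance, max_distance]
        self.distance_bias = nn.Embedding(2 * max_distance + 1, 1)
        self.scale = math.sqrt(hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim)
        seq_len = x.size(1)
        q, k, v = self.query(x), self.key(x), self.value(x)

        # raw attention logits: (batch, seq_len, seq_len)
        logits = torch.matmul(q, k.transpose(-1, -2)) / self.scale

        # relative distance matrix, clipped and shifted to valid embedding indices
        positions = torch.arange(seq_len, device=x.device)
        rel = positions[None, :] - positions[:, None]
        rel = rel.clamp(-self.max_distance, self.max_distance) + self.max_distance
        logits = logits + self.distance_bias(rel).squeeze(-1)  # broadcast over batch

        weights = F.softmax(logits, dim=-1)
        return torch.matmul(weights, v)

# Example usage on random contextual embeddings:
attn = DistanceAwareSelfAttention(hidden_dim=128)
out = attn(torch.randn(2, 20, 128))  # -> (2, 20, 128)
```

In a full matching pipeline of the kind the abstract describes, a layer like this would sit between the contextual encoder and the co-attention/multi-level matching stages; the actual DSSTM formulation should be taken from the paper and the linked repository.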