A context-enhanced sentence representation learning method for closed domains with topic modeling

2022 
Sentence representation approaches have been widely used and proven effective in many text modeling tasks and downstream applications. Many recent proposals learn sentence representations with deep neural frameworks. However, these methods are pre-trained on open domains and depend on the availability of large-scale data for model fitting. As a result, they may fail in special scenarios where data are sparse and interpretable embeddings are required, such as the legal, medical, or technical fields. In this paper, we present an unsupervised method for learning sentence representations in closed domains via topic modeling. We reformulate the inference process of a sentence in terms of its contextual sentences and its constituent words, and propose an effective context-enhanced process called bi-Directional Context-enhanced Sentence Representation Learning (bi-DCSR). This method exploits the semantic distributions of nearby contextual sentences and of the constituent words to form a context-enhanced sentence representation. To support the bi-DCSR, we develop a novel Bayesian topic model, the Hybrid Priors Topic Model (HPTM), which embeds sentences and words into the same latent, interpretable topic space. Based on the topic space defined by the HPTM, the bi-DCSR method learns the embedding of a sentence from its two-directional contextual sentences and the words in it, which allows us to efficiently learn high-quality sentence representations in such closed domains. In addition to an open-domain dataset from Wikipedia, our method is validated on three closed-domain datasets drawn from legal cases, electronic medical records, and technical reports. Our experiments indicate that the HPTM significantly outperforms existing topic models on language modeling and topic coherence.
Meanwhile, the bi-DCSR method not only outperforms state-of-the-art unsupervised learning methods on closed-domain sentence classification tasks, but also yields competitive performance against these established approaches on the open domain. Additionally, visualizations of the semantics of sentences and words demonstrate the interpretability of our model.
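The abstract describes bi-DCSR as combining, in a shared topic space, a sentence's own semantics with those of its two-directional context and its constituent words. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch: the function name, the mixing weights `alpha`/`beta`, and the simple averaging scheme are all assumptions, not the authors' method.

```python
import numpy as np

def bi_dcsr_embedding(sent_topics, prev_topics, next_topics, word_topics,
                      alpha=0.5, beta=0.3):
    """Hypothetical sketch: mix a sentence's topic distribution with its
    bidirectional sentence context and its words' topic distributions.
    All inputs are distributions over the same K topics (as in a shared
    HPTM-style topic space); alpha and beta are illustrative weights."""
    # average the topic distributions of the words in the sentence
    word_part = np.mean(word_topics, axis=0)
    # average the previous- and next-sentence (two-directional) context
    context_part = 0.5 * (prev_topics + next_topics)
    # convex combination in the shared topic space
    emb = alpha * sent_topics + beta * context_part + (1 - alpha - beta) * word_part
    return emb / emb.sum()  # renormalize to a proper distribution

# toy usage over a 4-topic space
K = 4
s = np.full(K, 1.0 / K)                       # the sentence itself
p = np.array([0.4, 0.3, 0.2, 0.1])            # previous sentence
n = np.array([0.1, 0.2, 0.3, 0.4])            # next sentence
w = np.array([[0.25, 0.25, 0.25, 0.25],       # word 1
              [0.40, 0.20, 0.20, 0.20]])      # word 2
e = bi_dcsr_embedding(s, p, n, w)
print(e.round(3))
```

Because all three sources live in the same topic space, the result remains an interpretable distribution over topics, which is consistent with the interpretability claim in the abstract.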