Neural Joint Model for Part-of-Speech Tagging and Entity Extraction

2021 
Part-of-speech (POS) tagging and named entity recognition (NER) are fundamental sequence labeling tasks in natural language processing (NLP), and joint learning of both tasks offers an effective one-step solution. Existing research has made limited efforts to meet such needs for the Sindhi language. POS tagging and NER are highly correlated sequence tagging tasks: most often, a word recognized as an entity by the NER system is also tagged as a noun by a POS tagger. Thus, in this paper, we propose a neural joint model based on a bidirectional long short-term memory (BiLSTM) network and adversarial transfer learning that incorporates syntactic information from the two tasks through task-shared representations. The syntactic structure captures long-range dependencies among words. Moreover, self-attention is employed to explicitly capture intra-sentence dependencies in the joint model. Empirical results on two benchmark datasets show that our proposed joint model consistently and significantly surpasses existing methods.
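The abstract does not spell out the architecture in detail, so the following PyTorch sketch only illustrates the general idea it describes: a task-shared BiLSTM encoder with a self-attention layer feeding two task-specific heads for POS and NER. All class and parameter names (JointBiLSTMTagger, emb_dim, hidden_dim, the tag-set sizes) are hypothetical, and the adversarial task discriminator used for transfer learning is omitted for brevity.

```python
import torch
import torch.nn as nn

class JointBiLSTMTagger(nn.Module):
    """Minimal sketch (not the paper's exact model): a shared BiLSTM
    encoder with self-attention feeding two tagging heads."""

    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200,
                 num_pos_tags=17, num_ner_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Task-shared BiLSTM encoder (hidden_dim per direction).
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Self-attention over BiLSTM states to capture intra-sentence
        # dependencies explicitly.
        self.attn = nn.MultiheadAttention(2 * hidden_dim, num_heads=4,
                                          batch_first=True)
        # Separate classification heads for the two correlated tasks.
        self.pos_head = nn.Linear(2 * hidden_dim, num_pos_tags)
        self.ner_head = nn.Linear(2 * hidden_dim, num_ner_tags)

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq, emb_dim)
        h, _ = self.bilstm(x)            # (batch, seq, 2 * hidden_dim)
        a, _ = self.attn(h, h, h)        # self-attention over the sentence
        h = h + a                        # residual combination
        return self.pos_head(h), self.ner_head(h)


# Toy usage: a batch of 2 sentences, 5 tokens each.
model = JointBiLSTMTagger(vocab_size=1000)
tokens = torch.randint(1, 1000, (2, 5))
pos_logits, ner_logits = model(tokens)
print(pos_logits.shape, ner_logits.shape)  # (2, 5, 17) and (2, 5, 9)
```

In a training setup of this kind, the two heads would typically be optimized with a summed cross-entropy loss over the POS and NER label sequences, so that gradients from both tasks shape the shared encoder.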