A Pre-LN Transformer Network Model with Lexical Features for Fine-Grained Sentiment Classification

2021 
Sentiment classification is an important task in sentiment analysis that aims to identify the sentiment polarity of subjective text. Although most existing models can effectively identify the extreme polarities (extremely positive, extremely negative), we find that they cannot clearly distinguish the intermediate polarities (generally positive, neutral, generally negative). Moreover, models based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs) suffer from weak parallel computing capability and a limited ability to capture long-distance dependencies. This paper proposes a new model based on the Pre-LN Transformer and lexical features, which improves fine-grained sentiment classification of online reviews. In this work, a Pre-LN Transformer encoder with multi-head self-attention captures hidden features in different subspaces. Unlike the Post-LN Transformer, the Pre-LN Transformer places the layer normalization inside the residual block, which makes training more stable. On this basis, we reconstruct the VADER lexicon and integrate the sentiment lexical features extracted from it into the model. We perform sentiment classification on two publicly available online review datasets. Experimental results show that our model achieves state-of-the-art performance while distinguishing fine-grained sentiment.
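The Pre-LN vs. Post-LN distinction mentioned in the abstract comes down to where layer normalization sits relative to the residual connection. A minimal NumPy sketch of the two residual-block variants (the sublayer here is a placeholder for attention or feed-forward; the function names are illustrative, not from the paper):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the feature (last) dimension.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def post_ln_block(x, sublayer):
    # Post-LN (original Transformer): normalize AFTER the residual sum,
    # so the identity path also passes through the normalization.
    return layer_norm(x + sublayer(x))

def pre_ln_block(x, sublayer):
    # Pre-LN: normalize INSIDE the residual branch; the identity path
    # stays untouched, which tends to stabilize training.
    return x + sublayer(layer_norm(x))
```

With an identity sublayer, the Post-LN output is fully normalized (zero mean per token), while the Pre-LN output keeps the unnormalized residual stream, illustrating why gradients flow more directly through Pre-LN stacks.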