News text classification based on Bidirectional Encoder Representations from Transformers

2021 
In order to obtain useful information accurately and efficiently, people are paying increasing attention to the data-redundancy problem caused by the sheer volume of available information. In recent years, researchers at home and abroad have proposed a variety of frameworks for different natural language processing tasks, each with its own advantages and disadvantages. Text classification is one of the classic problems in natural language processing, and news text classification is an important everyday task that readily attracts attention. This experiment uses the BERT model, built on the Transformer architecture, to classify a news text dataset, and compares it on the same dataset against a long short-term memory (LSTM) recurrent network. The evaluation metrics are the overall classification accuracy and the loss value of each model. Experimental results show that the classification accuracy of the BERT model is significantly higher than that of the LSTM network.
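The abstract evaluates both models by classification accuracy and loss. As an illustration (not the paper's code), a minimal sketch of these two metrics for a multi-class classifier, where `probs` is a hypothetical list of per-class probability rows produced by a model's softmax layer and `labels` holds the gold class indices:

```python
import math

def evaluate(probs, labels):
    """Compute classification accuracy and mean cross-entropy loss.

    probs:  list of per-example probability distributions over classes
    labels: list of gold class indices, one per example
    """
    correct = 0
    total_loss = 0.0
    for p, y in zip(probs, labels):
        # Predicted class = index of the highest probability.
        pred = max(range(len(p)), key=p.__getitem__)
        if pred == y:
            correct += 1
        # Cross-entropy contribution: negative log-probability of the gold class.
        total_loss += -math.log(p[y])
    return correct / len(labels), total_loss / len(labels)

# Toy example with three news categories and three documents.
probs = [[0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1],
         [0.3, 0.4, 0.3]]
labels = [0, 1, 2]
accuracy, loss = evaluate(probs, labels)
```

In the paper's setting, the rows of `probs` would come from the fine-tuned BERT (or LSTM) classification head; lower loss and higher accuracy indicate the better model.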