A Survey of the State-of-the-Art Models in Neural Abstractive Text Summarization

2021 
Dealing with vast amounts of textual data requires efficient systems. Automatic summarization systems can address this need, so it is essential to study the design of existing automatic summarization systems and improve them to meet the demands of continuously growing data and varied user needs. This study surveys the scientific literature on recent research in automatic text summarization, specifically neural network-based abstractive summarization. A review of various neural network-based abstractive summarization models is presented. The proposed conceptual framework comprises five key elements: encoder-decoder architecture, mechanisms, training strategies and optimization algorithms, datasets, and evaluation metrics. A description of these elements is also included in this article. The purpose of this research is to provide an overall understanding of, and familiarity with, the elements of recent neural network-based abstractive text summarization models through an up-to-date review, and to raise awareness of the challenges and issues these systems face. A qualitative analysis is performed with the help of a concept matrix indicating common trends in the design of recent neural abstractive summarization systems. Models employing a transformer-based encoder-decoder architecture are found to be the new state-of-the-art. Based on the knowledge acquired from the survey, this article suggests using pre-trained language models in combination with neural network architectures for the abstractive summarization task.
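
To illustrate the survey's closing suggestion of combining pre-trained language models with an encoder-decoder architecture, here is a minimal sketch of abstractive summarization using a pre-trained transformer. The Hugging Face transformers library and the facebook/bart-large-cnn checkpoint are illustrative choices on our part, not prescribed by the article:

```python
# Minimal sketch: abstractive summarization with a pre-trained
# transformer encoder-decoder (BART). Library and checkpoint are
# illustrative choices, not prescribed by the surveyed article.
from transformers import pipeline

# Load a summarization pipeline backed by a pre-trained
# encoder-decoder model fine-tuned on CNN/DailyMail.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Dealing with vast amounts of textual data requires efficient "
    "systems. Automatic summarization systems can address this need "
    "by condensing long documents into short, fluent summaries."
)

# Generate an abstractive summary; length bounds are in tokens.
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```

The decoder generates the summary token by token rather than extracting sentences verbatim, which is what distinguishes abstractive from extractive summarization in the survey's taxonomy.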