Multi-model transfer and optimization for cloze task

2020 
Substantial progress has been made recently in training context-aware language models. CLOTH is a human-created cloze dataset that better evaluates machine reading comprehension. Although the authors of CLOTH reported experiments on BERT and context2vec, the performance of other models is still worth studying. We applied the CLOTH dataset to other models and evaluated their performance in light of each model's mechanism. The results show that ALBERT performs well on the cloze task: its accuracy reaches 92.24%, 6.34 percentage points higher than human performance. In addition, we introduce adversarial training into the models. Experiments show that adversarial training significantly improves the robustness and accuracy of the models. On BERT-large, accuracy improves by up to 0.15% with adversarial training.
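To make the evaluation setup concrete, here is a minimal sketch of scoring a cloze blank with a pretrained masked language model. It assumes the HuggingFace transformers library and the albert-base-v2 checkpoint; the score_options helper and the first-subword scoring shortcut are illustrative assumptions, not the paper's exact method.

```python
import torch
from transformers import AlbertForMaskedLM, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMaskedLM.from_pretrained("albert-base-v2")
model.eval()

def score_options(passage, options):
    """Fill the blank ('_') with [MASK] and pick the option the model prefers."""
    text = passage.replace("_", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    # Position of the mask token in the tokenized sequence (assumes one blank).
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()
    with torch.no_grad():
        mask_logits = model(**inputs).logits[0, mask_pos]  # scores over the vocabulary
    # Score each candidate by the logit of its first subword at the blank.
    scores = [mask_logits[tokenizer(o, add_special_tokens=False)["input_ids"][0]].item()
              for o in options]
    return options[scores.index(max(scores))]

print(score_options("She _ to school every day.", ["goes", "go", "gone", "going"]))
```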
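The abstract does not spell out the adversarial training recipe; a common choice in this setting is FGM (Fast Gradient Method), which perturbs the word-embedding weights along the gradient before a second backward pass. The sketch below assumes that recipe; the epsilon value and the "word_embeddings" layer name are assumptions.

```python
import torch

class FGM:
    """Perturb embedding weights along the gradient, then restore them (a common FGM recipe)."""
    def __init__(self, model, epsilon=1.0, emb_name="word_embeddings"):
        self.model, self.epsilon, self.emb_name = model, epsilon, emb_name
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()  # save clean weights
                norm = torch.norm(param.grad)
                if norm != 0:
                    # Step of size epsilon in the direction of the gradient.
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# Inside one training step (hypothetical model/optimizer/batch names):
#   loss = model(**batch).loss; loss.backward()   # gradients on clean input
#   fgm.attack()                                  # perturb embeddings
#   model(**batch).loss.backward()                # adversarial gradients accumulate
#   fgm.restore(); optimizer.step(); optimizer.zero_grad()
```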