Bi-SimCut: A Simple Strategy for Boosting Neural Machine Translation

2022 
We introduce Bi-SimCut: a simple but effective training strategy to boost neural machine translation (NMT) performance. It consists of two procedures: bidirectional pretraining and unidirectional finetuning. Both procedures utilize SimCut, a simple regularization method that enforces consistency between the output distributions of the original and the cutoff sentence pairs. Without leveraging extra data via back-translation or integrating large-scale pretrained models, Bi-SimCut achieves strong translation performance across five translation benchmarks (data sizes ranging from 160K to 20.2M): BLEU scores of 31.16 for $\texttt{en}\rightarrow\texttt{de}$ and 38.37 for $\texttt{de}\rightarrow\texttt{en}$ on the IWSLT14 dataset, 30.78 for $\texttt{en}\rightarrow\texttt{de}$ and 35.15 for $\texttt{de}\rightarrow\texttt{en}$ on the WMT14 dataset, and 27.17 for $\texttt{zh}\rightarrow\texttt{en}$ on the WMT17 dataset. SimCut is not a new method but a simplified version of Cutoff (Shen et al., 2020) adapted for NMT, and it can be viewed as a perturbation-based method. Given the universality and simplicity of Bi-SimCut and SimCut, we believe they can serve as strong baselines for future NMT research.
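
The abstract describes SimCut only at a high level: a cross-entropy term plus a consistency term between the output distributions of an original and a "cutoff" (perturbed) forward pass. A minimal PyTorch sketch of such an objective is shown below. The helper name `simcut_loss`, the `embed_mask_prob` argument on the model, and the hyperparameter values are illustrative assumptions, not the paper's actual interface.

```python
import torch
import torch.nn.functional as F

def simcut_loss(model, src_tokens, tgt_tokens, tgt_labels,
                p_cut: float = 0.05, alpha: float = 3.0):
    """Sketch of a SimCut-style training objective (assumed interface).

    Cross-entropy on the original sentence pair plus a symmetric KL
    consistency term between the output distributions of the original
    and a cutoff (embedding-perturbed) forward pass.
    """
    # Forward pass on the original sentence pair: (batch, tgt_len, vocab).
    logits = model(src_tokens, tgt_tokens)
    # cross_entropy expects class dim second: (batch, vocab, tgt_len).
    ce = F.cross_entropy(logits.transpose(1, 2), tgt_labels)

    # Forward pass on the cutoff pair: same inputs, with a fraction of
    # embedding dimensions zeroed out. `embed_mask_prob` is a hypothetical
    # hook for whatever cutoff perturbation the model implements.
    logits_cut = model(src_tokens, tgt_tokens, embed_mask_prob=p_cut)

    # Consistency: symmetric KL between the two output distributions.
    p = F.log_softmax(logits, dim=-1)
    q = F.log_softmax(logits_cut, dim=-1)
    kl = 0.5 * (
        F.kl_div(q, p, log_target=True, reduction="batchmean")  # KL(P || Q)
        + F.kl_div(p, q, log_target=True, reduction="batchmean")  # KL(Q || P)
    )
    return ce + alpha * kl
```

Under this reading, bidirectional pretraining would simply apply the same loss to a dataset containing both source-to-target and target-to-source pairs before finetuning on a single direction.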