Accelerating Transformer for Neural Machine Translation

2021 
Neural Machine Translation (NMT) models based on the Transformer achieve promising results in both translation quality and training speed. The framework adopts parallel structures that greatly accelerate training without losing quality. However, because the self-attention network in the decoder cannot maintain parallelization under the auto-regressive scheme, the Transformer does not enjoy the same speed at inference as it does during training. In this work, with simplicity and feasibility in mind, we introduce a gated cumulative attention network that replaces the self-attention part of the Transformer decoder and preserves parallelization in the inference phase. The gated cumulative attention network consists of two sub-layers: a gated linearly cumulative layer that models the relationship between already-predicted tokens and the current representation, and a feature fusion layer that enhances the representation through a feature fusion operation. The proposed method was evaluated on the WMT17 datasets across 12 language pairs. Experimental results show the effectiveness of the proposed method and demonstrate that the gated cumulative attention network is an adequate alternative to the self-attention part of the Transformer decoder.
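The abstract does not give the exact equations, but the idea of a gated, cumulative replacement for decoder self-attention can be sketched as follows. The sketch below assumes the "gated linearly cumulative layer" is a sigmoid-gated mix of the current token representation with a causal cumulative average of the preceding representations, and that the "feature fusion layer" is a simple concatenation-plus-projection with a residual connection; all class and variable names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedCumulativeAttention(nn.Module):
    """Hypothetical drop-in for the decoder self-attention sub-layer."""

    def __init__(self, d_model: int):
        super().__init__()
        self.gate_proj = nn.Linear(2 * d_model, d_model)  # produces the gate
        self.fuse_proj = nn.Linear(2 * d_model, d_model)  # feature fusion
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) decoder-side token representations.
        # Causal cumulative average: position t only sees tokens 1..t,
        # yet the whole sequence is processed in parallel via cumsum.
        steps = torch.arange(1, x.size(1) + 1, device=x.device).view(1, -1, 1)
        cum = torch.cumsum(x, dim=1) / steps

        # Gated linearly cumulative layer: the gate controls how much of the
        # accumulated history flows into the current representation.
        gate = torch.sigmoid(self.gate_proj(torch.cat([x, cum], dim=-1)))
        mixed = gate * cum + (1.0 - gate) * x

        # Feature fusion layer: fuse the gated history with the original
        # representation, then add a residual connection and normalize.
        fused = self.fuse_proj(torch.cat([mixed, x], dim=-1))
        return self.norm(x + fused)

# Usage: process a batch of 2 sequences of length 10 with model width 512.
layer = GatedCumulativeAttention(d_model=512)
out = layer(torch.randn(2, 10, 512))  # -> (2, 10, 512)
```

Under this assumed formulation, inference is cheap: the cumulative sum for a new token can be updated in O(1) per step from the previous running sum, instead of attending over all previously generated tokens as standard decoder self-attention does.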