MCDALNet: Multi-scale Contextual Dual Attention Learning Network for Medical Image Segmentation

2021 
Medical image segmentation has been widely studied, and many methods have been proposed. Among the existing methods, U-Net and its variants have achieved promising performance. However, these methods miss certain regions because they generate only fixed-scale receptive fields in each encoder layer and cannot establish rich contextual dependencies on the fused features in the decoder. To address these problems, this paper proposes a multi-scale contextual dual attention learning network (MCDALNet) that captures multi-scale information together with spatial and channel feature dependencies. MCDALNet contains two components: an encoder with three multi-scale contextual learning (MCL) modules and a decoder with three dual attention modules. The MCL module extracts multi-scale contextual information from low-level features through a split-transform-merge-residual architecture. The dual attention module consists of a position attention sub-module and a channel attention sub-module, which improve feature representation and benefit medical image segmentation. The position attention sub-module captures spatial dependencies by learning similar spatial features, and the channel attention sub-module captures channel dependencies by learning relevant features on the channel maps. Experimental results show that our approach achieves significant improvement in medical image segmentation and outperforms representative deep learning models on public datasets.
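The dual attention mechanism the abstract describes can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the authors' implementation: the position branch computes a softmax-normalized similarity matrix over spatial locations and the channel branch over channel maps, each followed by a residual connection; the learned 1x1 convolution projections of a full attention module are replaced here with identity projections for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(x):
    """Sketch of a position attention sub-module on a (C, H, W) feature map.

    Aggregates features from spatially similar positions, then adds a
    residual connection. Learned projections are omitted (assumption).
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)          # (C, N) with N = H * W positions
    energy = flat.T @ flat              # (N, N) pairwise spatial similarity
    attn = softmax(energy, axis=-1)     # each position attends over all others
    out = flat @ attn.T                 # weighted aggregation per position
    return out.reshape(C, H, W) + x     # residual connection

def channel_attention(x):
    """Sketch of a channel attention sub-module on a (C, H, W) feature map.

    Aggregates features from mutually relevant channel maps, then adds a
    residual connection.
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)
    energy = flat @ flat.T              # (C, C) pairwise channel similarity
    attn = softmax(energy, axis=-1)     # each channel attends over all others
    out = attn @ flat                   # weighted aggregation per channel
    return out.reshape(C, H, W) + x     # residual connection
```

In a decoder, the two branches would typically be applied to the same fused feature map and their outputs summed; both operations preserve the input shape, so they can be dropped between convolutional layers.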