Detail-Aware Multiscale Context Fusion Network for Cloud Detection

2022 
In recent years, a large number of convolutional neural network (CNN)-based cloud detection algorithms have been proposed for remote sensing image preprocessing, and most of them adopt an encoder–decoder structure. However, the downsampling and upsampling operations that form the basic components of these methods inevitably discard detailed information from high-level features, which degrades cloud detection performance. At the same time, physical characteristics of clouds, such as their variable size and irregular structure, demand strong multiscale feature representation from the network. To this end, we propose a novel cloud detection network named DMNet, which contains a dense feature enhancement module (DFEM) and a multiscale context fusion spatial attention module (MCFSAM). DFEM achieves information complementarity by exploiting the different properties of features at different levels of the encoder, strengthening the detailed information in high-level features and enriching low-level features with semantic information. MCFSAM introduces a multiscale context fusion block (MCFB) into spatial attention, enabling the network to densely capture contextual information at different scales and further emphasize useful features in the spatial dimension. Extensive experiments on the GF-1 wide field-of-view (GF-1 WFV) satellite imagery dataset and the Moderate Resolution Imaging Spectroradiometer (MODIS) dataset demonstrate that our method outperforms other state-of-the-art cloud detection algorithms.
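The abstract does not give implementation details, so the following PyTorch sketch is only an assumption-laden illustration of the two modules it describes: DFEM is rendered here as cross-level fusion of a detailed low-level feature with an upsampled semantic high-level feature, and MCFSAM as a spatial attention map computed from parallel dilated convolutions (the MCFB). All channel counts, dilation rates, and the residual connection are hypothetical choices, not the paper's exact design.

```python
# Minimal sketch of DFEM and MCFSAM as described in the abstract.
# Layer choices (1x1 projections, dilation rates, residual attention)
# are assumptions for illustration; the paper's design may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DFEM(nn.Module):
    """Dense feature enhancement: make a low-level (high-resolution)
    and a high-level (low-resolution) encoder feature complement
    each other, trading detail for semantics."""

    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.low_proj = nn.Conv2d(low_ch, out_ch, kernel_size=1)
        self.high_proj = nn.Conv2d(high_ch, out_ch, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(out_ch * 2, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, low, high):
        # Upsample the semantic high-level feature to the spatial
        # size of the detailed low-level feature, then fuse.
        high_up = F.interpolate(self.high_proj(high), size=low.shape[-2:],
                                mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([self.low_proj(low), high_up], dim=1))


class MCFB(nn.Module):
    """Multiscale context fusion block: parallel dilated 3x3
    convolutions densely capture context at several scales."""

    def __init__(self, ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.merge = nn.Conv2d(ch * len(dilations), ch, kernel_size=1)

    def forward(self, x):
        return self.merge(torch.cat([b(x) for b in self.branches], dim=1))


class MCFSAM(nn.Module):
    """Spatial attention whose attention map is computed from
    multiscale context (MCFB) rather than a single-scale conv."""

    def __init__(self, ch):
        super().__init__()
        self.mcfb = MCFB(ch)
        self.attn = nn.Conv2d(ch, 1, kernel_size=1)

    def forward(self, x):
        # Per-pixel weights in (0, 1) emphasize useful spatial
        # positions; the residual keeps the original signal.
        weights = torch.sigmoid(self.attn(self.mcfb(x)))
        return x * weights + x


if __name__ == "__main__":
    low = torch.randn(1, 64, 128, 128)   # detailed, high-resolution
    high = torch.randn(1, 256, 32, 32)   # semantic, low-resolution
    fused = DFEM(64, 256, 128)(low, high)
    out = MCFSAM(128)(fused)
    print(out.shape)  # torch.Size([1, 128, 128, 128])
```

Computing the attention map from fused multiscale context, rather than from a single-scale convolution, is what lets the spatial attention respond to clouds of variable size; the broadcast of the single-channel weight map over all feature channels keeps the module lightweight.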