A Gather-to-Guide Network for Remote Sensing Semantic Segmentation of RGB and Auxiliary Image

2021 
Convolutional neural network (CNN)-based feature fusion of RGB and auxiliary remote sensing data is known to enable improved semantic segmentation. However, such fusion is challenging because of the substantial variance in data characteristics and quality (e.g., data uncertainties and misalignment) between the two modalities. In this article, we propose a unified gather-to-guide network (G2GNet) for remote sensing semantic segmentation of RGB and auxiliary data. The key component of the proposed architecture is a novel gather-to-guide module (G2GM) that consists of a feature gatherer and a feature guider. The feature gatherer generates a set of cross-modal descriptors by absorbing the complementary merits of the RGB and auxiliary modality data. The feature guider calibrates the RGB feature response using channel-wise guide weights extracted from the cross-modal descriptors. In this way, the G2GM performs RGB feature calibration with different modality data in a gather-to-guide fashion, preserving informative features while suppressing redundant and noisy information. Extensive experiments conducted on two benchmark datasets show that the proposed G2GNet is robust to data uncertainties while also improving the semantic segmentation performance on RGB and auxiliary remote sensing data.
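
The abstract does not give implementation details, but the gather-then-guide pattern it describes (pool cross-modal descriptors, then derive channel-wise weights that recalibrate the RGB features) resembles a cross-modal squeeze-and-excitation block. Below is a minimal PyTorch sketch of one plausible reading; the class name `GatherToGuide`, the pooling/MLP design, and all layer sizes are illustrative assumptions, not the authors' actual G2GM.

```python
import torch
import torch.nn as nn


class GatherToGuide(nn.Module):
    """Hypothetical sketch of a G2GM-style block: gather a cross-modal
    descriptor by global pooling, then guide (recalibrate) the RGB
    features with channel-wise weights. Design details are assumed."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Gatherer (assumed): fuse pooled RGB + auxiliary descriptors.
        self.gather = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
        )
        # Guider (assumed): produce per-channel guide weights in (0, 1).
        self.guide = nn.Sequential(
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, rgb: torch.Tensor, aux: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = rgb.shape
        # Global average pooling yields one descriptor per modality.
        d_rgb = rgb.mean(dim=(2, 3))            # (B, C)
        d_aux = aux.mean(dim=(2, 3))            # (B, C)
        desc = torch.cat([d_rgb, d_aux], dim=1) # cross-modal descriptor
        weights = self.guide(self.gather(desc)).view(b, c, 1, 1)
        # Calibrate the RGB response; uninformative or noisy channels
        # receive small weights and are suppressed.
        return rgb * weights


# Usage: calibrate 64-channel RGB features with an auxiliary (e.g., DSM) branch.
g2gm = GatherToGuide(channels=64)
rgb_feat = torch.randn(2, 64, 32, 32)
aux_feat = torch.randn(2, 64, 32, 32)
out = g2gm(rgb_feat, aux_feat)  # (2, 64, 32, 32)
```

One property of this gather-to-guide formulation is that the auxiliary branch only modulates the RGB stream rather than being summed or concatenated into it, which is consistent with the abstract's claim of robustness to auxiliary-data uncertainties and misalignment.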