Deep multi-level fusion network for multi-source image pixel-wise classification

2021 
Abstract For multi-source image pixel-wise classification, the information carried by each image of the same area or scene is different and complementary. However, integrating these sources for decision-making is a difficult problem. In this paper, we focus on the characteristics of multi-source images and propose a novel pixel-wise classification method, named the deep multi-level fusion network. The proposed method classifies multi-sensor data including very high-resolution (VHR) RGB imagery, hyperspectral imagery (HSI) and multispectral light detection and ranging (MS-LiDAR) point cloud data. First, a deep spectral–spatial attention network is proposed to process the HSI and MS-LiDAR images and obtain a learned classification map, which is based on feature-level fusion. Next, a down-superpixel segmentation algorithm is proposed to obtain a segmentation result for the VHR RGB imagery. Finally, the feature-level fusion results are refined by the down-superpixel segmentation results at the decision level to produce the final classification. Extensive experiments and analyses on the grss_dfc_2018 dataset demonstrate that the proposed multi-level fusion network achieves better results in multi-source image pixel-wise classification.
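To make the two fusion levels concrete, the sketch below illustrates the general idea in PyTorch: feature-level fusion of HSI and MS-LiDAR patches through a spectral–spatial attention block, followed by decision-level refinement of the predicted label map using superpixels from the VHR RGB image via per-segment majority voting. This is a minimal illustration, not the authors' implementation; the layer sizes, band counts, attention design, and refinement rule are all assumptions for demonstration.

```python
import torch
import torch.nn as nn


class SpectralSpatialAttentionFusion(nn.Module):
    """Toy feature-level fusion of HSI and MS-LiDAR inputs (assumed shapes)."""

    def __init__(self, hsi_bands=48, lidar_bands=3, n_classes=20):
        super().__init__()
        in_ch = hsi_bands + lidar_bands
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Spectral (channel) attention: squeeze-and-excitation style gate.
        self.spectral_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(64, 16, 1), nn.ReLU(),
            nn.Conv2d(16, 64, 1), nn.Sigmoid(),
        )
        # Spatial attention: single-channel gate over the feature map.
        self.spatial_att = nn.Sequential(nn.Conv2d(64, 1, 7, padding=3), nn.Sigmoid())
        self.classifier = nn.Conv2d(64, n_classes, 1)

    def forward(self, hsi, lidar):
        x = self.features(torch.cat([hsi, lidar], dim=1))  # feature-level fusion
        x = x * self.spectral_att(x)                        # re-weight spectral channels
        x = x * self.spatial_att(x)                         # re-weight spatial locations
        return self.classifier(x)                           # per-pixel class scores


def refine_with_superpixels(label_map, segments):
    """Decision-level refinement: assign each superpixel its majority label.

    label_map: (H, W) integer tensor of predicted classes from the fusion net.
    segments:  (H, W) integer tensor of superpixel ids from the VHR RGB image.
    """
    refined = label_map.clone()
    for seg_id in segments.unique():
        mask = segments == seg_id
        refined[mask] = label_map[mask].mode().values  # majority vote per segment
    return refined
```

In this sketch the network produces a dense score map from the fused HSI/MS-LiDAR input, an argmax over the class dimension gives the feature-level label map, and the superpixel voting step smooths that map so labels respect the object boundaries visible in the high-resolution RGB image, mirroring the paper's two-stage (feature-level then decision-level) fusion strategy.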