Deep Multimodal Fusion Network for Semantic Segmentation Using Remote Sensing Image and LiDAR Data

2021 
Extracting semantic information from very-high-resolution (VHR) aerial images is a prominent topic in Earth observation research. An increasing number of sensor platforms are appearing in remote sensing, each of which can provide complementary or enhanced multimodal information, such as optical images, light detection and ranging (LiDAR) point clouds, infrared images, or inertial measurement unit (IMU) data. However, current deep networks for LiDAR and VHR images have not fully exploited the potential of multimodal data: simple stacked fusion ignores the structural differences between the modalities and the handcrafted statistical characteristics within each modality. For multimodal remote sensing data and the corresponding carefully designed handcrafted features, we design a novel deep multimodal fusion network (MFNet) that exploits VHR aerial images and LiDAR data together with the corresponding intramodal features, such as LiDAR-derived features [slope and normalized digital surface model (NDSM)] and imagery-derived features [infrared-red-green (IRRG), normalized difference vegetation index (NDVI), and difference of Gaussians (DoG)]. Technically, we introduce an attention mechanism and multimodal learning to adaptively fuse intermodal and intramodal features. Specifically, we design a multimodal fusion mechanism, pyramid dilation blocks, and a multilevel feature fusion module. Through these modules, the network achieves adaptive fusion of multimodal features, enlarges the receptive field, and strengthens global-to-local contextual fusion. Moreover, we use a multiscale supervision training scheme to optimize the network. Extensive experiments and ablation studies on the ISPRS semantic labeling dataset and the IEEE GRSS DFC Zeebrugge dataset demonstrate the effectiveness of the proposed MFNet.
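
The abstract names two architectural ideas (attention-based fusion of an image branch and a LiDAR branch, and pyramid dilation blocks for a larger receptive field) without giving code. Below is a minimal, hypothetical PyTorch sketch of those two ideas only; the module names, channel sizes, dilation rates, and gating scheme are illustrative assumptions, not the authors' exact MFNet design.

```python
# Minimal sketch (not the authors' code): channel-attention fusion of two
# modality feature maps plus a pyramid dilation block, in PyTorch.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuse image and LiDAR features with a channel-attention gate (assumed design)."""

    def __init__(self, channels: int):
        super().__init__()
        # Squeeze-and-excitation style gate over the concatenated modalities.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, img_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        x = torch.cat([img_feat, lidar_feat], dim=1)  # (B, 2C, H, W)
        x = x * self.gate(x)                          # re-weight each channel adaptively
        return self.project(x)                        # project back to C channels


class PyramidDilationBlock(nn.Module):
    """Parallel dilated convolutions to enlarge the receptive field (assumed rates)."""

    def __init__(self, channels: int, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.merge = nn.Conv2d(len(rates) * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.merge(torch.cat([b(x) for b in self.branches], dim=1))


if __name__ == "__main__":
    img = torch.randn(2, 64, 64, 64)    # e.g. features from an IRRG/NDVI/DoG branch
    lidar = torch.randn(2, 64, 64, 64)  # e.g. features from an NDSM/slope branch
    fused = AttentionFusion(64)(img, lidar)
    out = PyramidDilationBlock(64)(fused)
    print(out.shape)                    # torch.Size([2, 64, 64, 64])
```

In this sketch the gate learns per-channel weights over the concatenated modalities, so the fusion is adaptive rather than a fixed stack; the parallel dilation rates then aggregate context at several scales before the features would flow into a multilevel fusion and multiscale supervision scheme as described in the abstract.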