An Overview of Multimodal Remote Sensing Data Fusion: From Image to Feature, From Shallow to Deep

2021 
With the ever-growing availability of remote sensing (RS) products from both satellite and airborne platforms, the simultaneous processing and interpretation of multimodal RS data have become increasingly significant in the RS field. The differing resolutions, contexts, and sensors of multimodal RS data enable materials on the Earth's surface to be identified and recognized more accurately by describing the same object from different points of view. As a result, multimodal RS data fusion has gradually emerged as a hotspot research direction in recent years. This paper presents an overview of multimodal RS data fusion across several mainstream applications, which can be roughly categorized as (1) image pansharpening, (2) hyperspectral and multispectral image fusion, (3) multimodal feature learning, and (4) crossmodal feature learning. For each topic, we briefly describe the research problem to be addressed and present representative and state-of-the-art models from shallow to deep perspectives.
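The abstract only names the four fusion families it surveys. As a concrete anchor for the first of them, the sketch below illustrates the classic Brovey (component-substitution) transform, one of the shallow pansharpening baselines such overviews typically cover. This is a minimal illustration, not a method from the paper itself; the function name `brovey_pansharpen` and the random toy arrays are assumptions made for demonstration.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-8):
    """Brovey-transform pansharpening (a shallow component-substitution baseline).

    ms  : float array of shape (H, W, B); multispectral bands already
          upsampled and co-registered to the panchromatic grid
    pan : float array of shape (H, W); high-resolution panchromatic band
    """
    intensity = ms.mean(axis=2)         # crude intensity component from the MS bands
    gain = pan / (intensity + eps)      # per-pixel injection gain (eps avoids division by zero)
    return ms * gain[..., None]         # rescale every band by the same spatial gain

# Toy usage with random data standing in for co-registered imagery.
ms = np.random.rand(64, 64, 4)   # hypothetical 4-band multispectral cube
pan = np.random.rand(64, 64)     # hypothetical panchromatic band
sharpened = brovey_pansharpen(ms, pan)
print(sharpened.shape)  # (64, 64, 4)
```

The same pattern, injecting high-resolution spatial detail into lower-resolution spectral data, generalizes to the survey's second category (hyperspectral and multispectral image fusion), where deep models replace this hand-crafted gain rule with learned mappings.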