Deep multisensor learning for missing-modality all-weather mapping

2021 
Abstract Multisensor Earth observation has significantly accelerated the development of collaborative remote sensing applications such as all-weather mapping using synthetic aperture radar (SAR) and optical images. However, in real-world application scenarios, not all data sources may be available, namely, the missing-modality problem: for example, poor imaging conditions obstruct the optical sensors, and only SAR images are available for mapping. This scenario raises the challenge of how to leverage historical multisensor data to improve the representation ability of the available model. Knowledge-transfer-based and knowledge-distillation-based approaches, as feasible solutions, can transfer knowledge from the models of other sensors to the available model. However, these approaches suffer from knowledge forgetting and from the multi-modality co-registration problem, so their use of historical multisensor data is inefficient. The essential problem is that these approaches follow a single-sensor data-driven design. In this paper, a registration-free multisensor data-driven learning method, namely, deep multisensor learning, is proposed from a new perspective of knowledge retention to overcome the above problems by learning a meta-sensory representation. To explore the existence of the meta-sensory representation, the meta-sensory representation hypothesis is first proposed, which posits that the essential difference between deep models trained on data from different sensors lies in the parameter distribution of the sensor-invariant and sensor-specific operations. Based on this hypothesis, a prototype network is proposed to learn the meta-sensory representation by modeling the knowledge retention mechanism with the proposed difference alignment operation (DiffAlignOp). DiffAlignOp enables the prototype network to dynamically generate sensor-specific networks that gather supervised signals from registration-free multisensor data. Because this dynamic network generation is differentiable, multisensor gradients can be obtained to learn the meta-sensory representation. To demonstrate the flexibility and practicality of deep multisensor learning, all-weather mapping was performed in a missing-modality scenario. The experiments were conducted on a large public multisensor all-weather mapping dataset consisting of high-resolution optical and SAR imagery with a spatial resolution of 0.5 m. The experimental results suggest that deep multisensor learning is superior to the other learning approaches in performance and stability, and they reveal the importance of the meta-sensory representation in multisensor remote sensing applications.
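
The differentiable dynamic network generation described above can be made concrete with a short sketch. The following PyTorch code is a minimal illustration, assuming that DiffAlignOp can be modeled as learnable per-sensor parameter offsets added to a shared prototype's parameters; the names `PrototypeNet` and `DiffAlignOp`, the offset parameterization, and the training loop are hypothetical stand-ins, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
from torch.func import functional_call  # requires PyTorch >= 2.0


class PrototypeNet(nn.Module):
    """Prototype network holding the shared (sensor-invariant) parameters."""

    def __init__(self, in_ch: int, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )

    def forward(self, x):
        return self.backbone(x)


class DiffAlignOp(nn.Module):
    """Hypothetical difference-alignment operation: learnable per-sensor
    parameter offsets that turn the shared prototype into a sensor-specific
    network on the fly (an assumed stand-in for the paper's DiffAlignOp)."""

    def __init__(self, prototype: nn.Module, sensors):
        super().__init__()
        self.prototype = prototype
        # One learnable offset per prototype parameter, per sensor.
        self.offsets = nn.ModuleDict({
            s: nn.ParameterDict({
                name.replace('.', '_'): nn.Parameter(torch.zeros_like(p))
                for name, p in prototype.named_parameters()
            })
            for s in sensors
        })

    def forward(self, x, sensor: str):
        # Dynamically generate the sensor-specific parameters. The generation
        # is differentiable, so gradients reach both prototype and offsets.
        params = {
            name: p + self.offsets[sensor][name.replace('.', '_')]
            for name, p in self.prototype.named_parameters()
        }
        return functional_call(self.prototype, params, (x,))


# Registration-free training loop: each sensor's (unregistered) batch
# supervises the shared prototype through its own generated network.
proto = PrototypeNet(in_ch=3, num_classes=2)
model = DiffAlignOp(proto, sensors=['optical', 'sar'])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for sensor in ['optical', 'sar']:
    x = torch.randn(4, 3, 64, 64)          # dummy batch for this sensor
    y = torch.randint(0, 2, (4, 64, 64))   # dummy per-pixel mapping labels
    loss = nn.functional.cross_entropy(model(x, sensor), y)
    loss.backward()                        # multisensor gradients accumulate
opt.step()
```

Under these assumptions, the key property is that batches from different sensors need no pixel-wise co-registration: gradients from every sensor accumulate in the shared prototype parameters, which is one plausible reading of how a single meta-sensory representation could be supervised by registration-free multisensor data.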