MAGE: Multisource Attention Network With Discriminative Graph and Informative Entities for Classification of Hyperspectral and LiDAR Data

2022 
Land use and land cover (LULC) classification plays a significant role in Earth observation tasks. Nowadays, we can observe the same scene with multiple heterogeneous sensors, and combining the diverse information they provide for multisource joint classification has become a promising research topic in the remote sensing community. For example, the fusion of hyperspectral imagery (HSI) and light detection and ranging (LiDAR) data has been under active research. Current methodology for joint HSI and LiDAR classification tends to ignore the topological relationships between pixels, limiting the effectiveness of feature extraction and fusion. Another obstacle to satisfactory performance is the scarcity of annotated data. To overcome these challenges, this article proposes a multisource attention network with discriminative graph and informative entities (MAGE) to improve joint classification. We use a semi-supervised graph transductive module to underline the relevance among pixels by explicitly constructing a multimodal adjacency matrix. In addition, MAGE designs a self-supervised feature extraction module for pretraining, mitigating the dependence on annotated samples and alleviating the overfitting and over-smoothing problems commonly encountered by deep graph neural networks (GNNs). Experimental results on three standard datasets, i.e., MUUFL, Trento, and Houston, demonstrate the effectiveness of the proposed approach. In particular, MAGE achieves an overall accuracy of 95.26% and an average accuracy of 96.27% on the challenging MUUFL dataset, surpassing state-of-the-art (SOTA) methods. The code and models are publicly available at https://github.com/d1x1u/MAGE.
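The abstract states that MAGE explicitly constructs a multimodal adjacency matrix so the graph transductive module can exploit topological relations among pixels. The paper's exact construction is not given here; the following is only a minimal sketch, assuming a common recipe in which per-modality Gaussian affinities from HSI and LiDAR features are fused, sparsified to a k-NN graph, and symmetrically normalized for a GNN. All names and parameters (multimodal_adjacency, k, sigma_h, sigma_l) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def multimodal_adjacency(hsi_feats, lidar_feats, k=10, sigma_h=1.0, sigma_l=1.0):
    """Sketch of a fused HSI + LiDAR adjacency matrix (hypothetical construction).

    hsi_feats:   (N, D_h) per-pixel HSI feature vectors
    lidar_feats: (N, D_l) per-pixel LiDAR feature vectors (e.g., elevation, intensity)
    """
    def pairwise_sq_dists(x):
        # ||x_i - x_j||^2 via the expansion ||x_i||^2 + ||x_j||^2 - 2 x_i . x_j
        sq = np.sum(x ** 2, axis=1)
        return sq[:, None] + sq[None, :] - 2.0 * x @ x.T

    # Per-modality Gaussian (RBF) affinities
    w_h = np.exp(-pairwise_sq_dists(hsi_feats) / (2 * sigma_h ** 2))
    w_l = np.exp(-pairwise_sq_dists(lidar_feats) / (2 * sigma_l ** 2))

    # Fuse modalities multiplicatively: an edge stays strong only when two
    # pixels agree in both the spectral and the elevation domains.
    w = w_h * w_l

    # Sparsify: keep the k strongest neighbours per node, then symmetrise.
    n = w.shape[0]
    adj = np.zeros_like(w)
    idx = np.argsort(-w, axis=1)[:, 1:k + 1]      # column 0 is the node itself
    rows = np.repeat(np.arange(n), k)
    adj[rows, idx.ravel()] = w[rows, idx.ravel()]
    adj = np.maximum(adj, adj.T)                  # undirected graph

    # Symmetric normalisation D^{-1/2} (A + I) D^{-1/2}, as is standard for GNN inputs.
    adj += np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
```

The multiplicative fusion and k-NN sparsification are one plausible design choice among several (additive fusion or learned edge weights are equally common); the sketch is meant only to make the "multimodal adjacency matrix" idea concrete.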