Explainable scale distillation for hyperspectral image classification

2022 
Abstract The land covers within an observed remote sensing scene usually appear at different scales, so ensembling multi-scale information is a common strategy for more accurate scene interpretation; however, this process is time-consuming. To address this issue, this paper proposes a scale distillation network to explore whether a single-scale classification network can match (or even exceed) the classification performance of a multi-scale one. The proposed scale distillation network consists of a cumbersome multi-scale teacher network and a lightweight single-scale student network. The former is trained to learn multi-scale information, and the latter improves its classification accuracy by receiving knowledge from the multi-scale teacher network in addition to the true labels. Experimental results demonstrate the advantages of scale distillation for hyperspectral image classification: the single-scale student network can even achieve higher evaluation accuracy than the multi-scale teacher network. In addition, a faithful explainable scale network is designed to visually explain the trained scale distillation network. Traditional deep neural networks are black boxes and lack interpretability, and explaining a trained network can uncover hidden information behind its predictions. We visually explain the predictions of the scale distillation network, and the results show that the explainable scale network can more precisely analyze the relationship between the learned scale features and the land-cover categories. The possible application of the explainable scale network to classification is also discussed in this study.
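The abstract does not specify the training objective, but the described teacher–student setup matches standard knowledge distillation. Below is a minimal PyTorch sketch of such an objective, assuming the common softened-logit formulation (Hinton et al., 2015); the temperature `T`, weight `alpha`, and the `teacher`/`student` module names are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hedged sketch of a standard distillation objective: a weighted sum of
    (1) KL divergence between temperature-softened teacher and student
    distributions and (2) cross-entropy with the true land-cover labels."""
    # Soft targets: match the student's softened distribution to the teacher's.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to compensate for the temperature
    # Hard targets: ordinary cross-entropy with the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Hypothetical usage: `teacher` is the frozen multi-scale network and
# `student` the lightweight single-scale network; `patches` is a batch of
# hyperspectral patches at the student's single input scale.
#
# with torch.no_grad():
#     teacher_logits = teacher(multi_scale_patches)
# loss = distillation_loss(student(patches), teacher_logits, labels)
```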