Lightweight Tensor Attention-Driven ConvLSTM Neural Network for Hyperspectral Image Classification

2021 
Recurrent neural networks, especially the convolutional long short-term memory (ConvLSTM), have attracted plenty of attention and shown promising results in many research fields due to their ability to model long-term dependencies. In this paper, a lightweight tensor attention-driven ConvLSTM neural network (TACLNN) is proposed for hyperspectral image (HSI) classification. First, to reduce the trainable parameters and memory requirements of ConvLSTM (specifically, the 2-D version of LSTM, i.e., ConvLSTM2D), a lightweight ConvLSTM2D cell is developed by utilizing tensor-train decomposition, resulting in a TT-ConvLSTM2D cell, from which a spatial-spectral TT-ConvLSTM 2-D neural network (SSTTCL2DNN) is built. However, this compression inevitably lowers the classification accuracy of SSTTCL2DNN on HSI data. To recover the accuracy lost to the TT-ConvLSTM2D cell, a learnable tensor attention residual block (TARB) module is built to further enhance the geometrical structure of SSTTCL2DNN. When applied to three widely used HSI benchmarks, the proposed TACLNN outperforms several state-of-the-art methods for HSI classification. In addition, TACLNN effectively reduces the number of parameters and the storage requirements while achieving higher classification accuracies than other competitive baselines.
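
The compression step rests on tensor-train (TT) decomposition of the ConvLSTM2D gate weights. Below is a minimal NumPy sketch of the generic TT-SVD procedure, not the authors' implementation: a dense weight tensor is split into a chain of small 3-way cores by sequential truncated SVDs, and the parameter count of the cores is compared against the full tensor. The kernel shape (3, 3, 64, 256) and the maximum TT rank of 8 are illustrative assumptions.

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Split `tensor` into TT cores G_k of shape (r_{k-1}, n_k, r_k) via TT-SVD."""
    dims = tensor.shape
    cores, r_prev = [], 1
    c = tensor.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        r = min(max_rank, s.size)                        # truncate to the TT rank
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        c = (s[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(c.reshape(r_prev, dims[-1], 1))         # last core
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor (to check the error)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full[0, ..., 0]                               # drop the boundary ranks

# Hypothetical example: a 3x3 ConvLSTM2D gate kernel with 64 input and
# 256 output channels, compressed with a maximum TT rank of 8.
rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3, 64, 256))
cores = tt_decompose(w, max_rank=8)
full_params = w.size
tt_params = sum(core.size for core in cores)
# A random tensor is nearly full-rank, so the error below is large;
# trained weights typically exhibit much more low-rank structure.
rel_err = np.linalg.norm(tt_reconstruct(cores) - w) / np.linalg.norm(w)
print(f"full params: {full_params}, TT params: {tt_params}, rel. error: {rel_err:.3f}")
```

In the proposed TACLNN, the accuracy lost to this kind of rank truncation is then recovered by the TARB module described above.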