Rotation-Invariant Attention Network for Hyperspectral Image Classification

2022 
Hyperspectral image (HSI) classification refers to identifying the land-cover category of each pixel based on the spectral signatures and spatial information of HSIs. In recent deep-learning-based methods, an HSI patch is usually cropped from the original HSI as the input to exploit spatial information, and $3 \times 3$ convolution is used as a key component to capture spatial features for HSI classification. However, $3 \times 3$ convolution is sensitive to the spatial rotation of its input, which causes these methods to perform worse on rotated HSIs. To alleviate this problem, a rotation-invariant attention network (RIAN) is proposed for HSI classification. First, a center spectral attention (CSpeA) module is designed to suppress redundant spectral bands while avoiding the influence of pixels from other categories. Then, a rectified spatial attention (RSpaA) module is proposed to replace $3 \times 3$ convolution for extracting rotation-invariant spectral-spatial features from HSI patches. The CSpeA module, $1 \times 1$ convolution, and the RSpaA module are combined to build the proposed RIAN for HSI classification. Experimental results demonstrate that RIAN is invariant to the spatial rotation of HSIs and has superior performance, e.g., achieving an overall accuracy of 86.53% (a 1.04% improvement) on the Houston database. The code for this work is available at https://github.com/spectralpublic/RIAN .
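The abstract's motivation rests on a simple property: a $3 \times 3$ convolution does not commute with rotation of its input, while a $1 \times 1$ convolution (a purely per-pixel operation) does. This is not the paper's code, just a minimal NumPy sketch of that contrast, using a naive single-channel convolution and a 90° rotation as the test transform:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' 2-D cross-correlation of a single-channel patch x with kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
patch = rng.standard_normal((7, 7))  # stand-in for one band of an HSI patch

# 3x3 kernel: rotating the input does NOT give the rotated output,
# so features extracted by 3x3 convolution change under rotation.
k3 = rng.standard_normal((3, 3))
rot_then_conv = conv2d_valid(np.rot90(patch), k3)
conv_then_rot = np.rot90(conv2d_valid(patch, k3))
print(np.allclose(rot_then_conv, conv_then_rot))  # False for a generic kernel

# 1x1 kernel: a per-pixel scaling, so rotation commutes exactly.
k1 = rng.standard_normal((1, 1))
rot_then_conv1 = conv2d_valid(np.rot90(patch), k1)
conv_then_rot1 = np.rot90(conv2d_valid(patch, k1))
print(np.allclose(rot_then_conv1, conv_then_rot1))  # True
```

This is why RIAN restricts spatial mixing to its attention modules and otherwise relies on $1 \times 1$ convolutions: per-pixel operations preserve rotation invariance, whereas fixed $3 \times 3$ spatial kernels break it.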