S3ANet: Spectral-spatial-scale attention network for end-to-end precise crop classification based on UAV-borne H2 imagery

2022 
Abstract: High spatial and spectral resolution (H2) imagery collected by unmanned aerial vehicle (UAV) systems is an important data source for precise crop classification. Although this data source provides abundant information about the crops of interest, it also introduces new challenges for image processing. Specifically, the spectral similarity of green crops leads to small inter-class distances, and the severe intra-class spectral variability and high spatial heterogeneity of H2 imagery increase the difficulty of precise classification. In addition, the scales of the different crop plots can differ greatly, which makes it difficult to determine the optimal patch size for deep learning-based classification models. In this paper, a spectral-spatial-scale attention network (S3ANet) is proposed for H2 imagery-based precise crop classification. In the proposed method, each channel, each pixel, and each scale perception of the feature map is adaptively weighted, to mitigate the intra-class spectral variability, the spatial heterogeneity, and the scale differences of the crop plots, respectively. Furthermore, S3ANet introduces the additive angular margin loss function to further increase the inter-class distances between the different crops and reduce misclassification. S3ANet was verified on the public WHU-Hi UAV-borne hyperspectral dataset and on the new WHU-Hi-JiaYu dataset, a dataset for precise rice classification built by the authors. In these experiments, the overall accuracy of S3ANet exceeded 96% on all datasets with only 50 training pixels per class, a significant improvement over state-of-the-art hyperspectral image classifiers such as SSRN, CNNCRF, and FPGA. The code of S3ANet is available at http://rsidea.whu.edu.cn/resource_sharing.htm.
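The abstract's core mechanism, adaptively weighting each channel (spectral), each pixel (spatial), and each scale perception of the feature map, can be illustrated with a minimal PyTorch sketch. All layer sizes, branch structures, and module names below are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class S3Attention(nn.Module):
    """Illustrative spectral-spatial-scale attention sketch (hypothetical
    layer sizes; the published S3ANet architecture may differ)."""
    def __init__(self, channels, n_scales=3, reduction=8):
        super().__init__()
        # Spectral (channel) attention: squeeze-and-excitation style gating.
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        # Spatial (pixel) attention: 1x1 conv producing one weight per pixel.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
        # Scale attention: softmax weights over the multi-scale branches.
        self.scale_fc = nn.Sequential(
            nn.Linear(channels, n_scales), nn.Softmax(dim=-1))
        # Multi-scale perception branches (dilated convs as a stand-in).
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in range(1, n_scales + 1))

    def forward(self, x):                    # x: (B, C, H, W)
        gap = x.mean(dim=(2, 3))             # global average pool -> (B, C)
        x = x * self.channel_fc(gap)[:, :, None, None]  # reweight channels
        x = x * self.spatial_conv(x)         # reweight pixels
        w = self.scale_fc(gap)               # (B, n_scales) scale weights
        feats = torch.stack([b(x) for b in self.branches], dim=1)
        return (w[:, :, None, None, None] * feats).sum(dim=1)  # (B, C, H, W)
```

The three attention steps here address, in order, the intra-class spectral variability, the spatial heterogeneity, and the plot-scale differences described in the abstract; the output keeps the input's shape, so the module can be dropped between convolutional stages.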
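The additive angular margin loss mentioned in the abstract is the ArcFace-style formulation, which pushes class centers apart on the unit hypersphere by adding a margin m to the target-class angle. A minimal sketch follows, again assuming PyTorch; the scale s and margin m defaults are hypothetical, not values from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAngularMarginLoss(nn.Module):
    """Minimal additive angular margin (ArcFace-style) loss sketch."""
    def __init__(self, feat_dim, n_classes, s=30.0, m=0.5):
        super().__init__()
        # Learnable class-center directions, one row per crop class.
        self.weight = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, features, labels):
        # Cosine of the angle between each feature and each class center.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the angular margin m only to the target-class angle.
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos)
        return F.cross_entropy(self.s * logits, labels)
```

Enlarging the target angle before the softmax forces a stricter decision boundary, which is how the loss increases inter-class distances between spectrally similar green crops.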