Unsupervised Spatial-Spectral CNN-Based Feature Learning for Hyperspectral Image Classification

2022 
The rapid development of remote sensing sensors has made the acquisition, analysis, and application of hyperspectral images (HSIs) increasingly widespread. However, limited sample sets, high-dimensional features, highly correlated bands, and mixed spectral information make the classification of HSIs a great challenge. In this article, an unsupervised multiscale and diverse feature learning (UMsDFL) method is proposed for HSI classification, which deeply considers spatial–spectral features via convolutional neural networks (CNNs). Specifically, after employing the simple noniterative clustering (SNIC) algorithm with a heuristic calculation of superpixel size, the HSIs are segmented into superpixels for feature learning. The unsupervised network is designed with a convolutional encoder and decoder, an additional clustering branch, and multilayer feature fusion to enhance the distinguishability of the learned features and the reusability of feature maps. Then, the spatial relationships and object attributes in large- and small-scale contexts are learned collaboratively through the unsupervised network to exploit complementary multiscale characteristics. Moreover, diverse features from hyperspectral information and nonsubsampled contourlet transform (NSCT) textures are learned simultaneously via the unsupervised network to alleviate the insufficiency of geometric representation. Finally, a random forest (RF) is adopted as the comprehensive classifier for land cover mapping based on the UMsDFL features, and superpixel regularization is applied to refine the classification results. A series of experiments is performed on three real-world HSI datasets to demonstrate the effectiveness of the UMsDFL approach. The experimental results show that the proposed UMsDFL achieves overall accuracies of 79.23%, 96.49%, and 77.26% on the Houston, Pavia, and Dioni datasets, respectively, with only five training samples per class.
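The abstract's final stage, unsupervised features followed by a random forest trained on only five labeled samples per class, can be sketched as follows. This is a minimal illustration, not the paper's method: the data are synthetic, and PCA stands in for the UMsDFL encoder as the unsupervised feature extractor.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for HSI pixels: 3 classes, 100 spectral bands,
# each class drawn around a distinct mean spectrum.
n_bands, n_per_class = 100, 200
class_means = rng.normal(0.0, 1.0, size=(3, n_bands))
X = np.vstack([m + 0.3 * rng.normal(size=(n_per_class, n_bands))
               for m in class_means])
y = np.repeat([0, 1, 2], n_per_class)

# Unsupervised feature learning step (PCA here; the paper uses a
# convolutional encoder-decoder with a clustering branch instead).
feats = PCA(n_components=10, random_state=0).fit_transform(X)

# Train the RF classifier with only 5 labeled samples per class,
# mirroring the small-sample setting reported in the abstract.
train_idx = np.concatenate([np.where(y == c)[0][:5] for c in range(3)])
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(feats[train_idx], y[train_idx])

acc = rf.score(feats, y)
```

The point of the sketch is the workflow shape: fit the feature extractor without labels on all pixels, then train the lightweight supervised classifier on the tiny labeled subset.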