HyperNet: Self-Supervised Hyperspectral Spatial–Spectral Feature Understanding Network for Hyperspectral Change Detection

2022 
The rapid development of self-supervised learning (SSL) has lowered the bar for learning feature representations from massive unlabeled data and has triggered a series of studies on change detection in remote sensing images. Challenges in adapting SSL from natural image classification to remote sensing image change detection arise from the differences between the two tasks: the learned patch-level feature representations are not sufficient for precise pixel-level change detection. In this article, we propose a novel pixel-level self-supervised hyperspectral spatial–spectral feature understanding network (HyperNet) that learns pixelwise feature representations for effective hyperspectral change detection. Concretely, whole images rather than patches are fed into the network, and the multitemporal spatial–spectral features are compared pixel by pixel. Instead of processing the 2-D imaging space and the spectral response dimension jointly, a powerful spatial–spectral attention module is put forward to explore the spatial correlations and discriminative spectral features of multitemporal hyperspectral images (HSIs) separately. Only positive samples at the same location of the bitemporal HSIs are created and forced to be aligned, with the aim of learning spectral difference-invariant features. Moreover, a new similarity loss function, named focal cosine loss, is proposed to address the imbalance between easy and hard positive sample comparisons: the weights of hard samples are enlarged and highlighted to promote network training. Six hyperspectral datasets have been adopted to test the validity and generalization of the proposed HyperNet. Extensive experiments demonstrate the superiority of HyperNet over state-of-the-art algorithms on downstream hyperspectral change detection tasks.
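The focal cosine idea described above can be illustrated with a minimal sketch: a cosine-similarity loss on per-pixel feature pairs, with a focal-style weight that grows as the pair becomes harder (less similar). The function name, the `gamma` exponent, and the hardness mapping below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def focal_cosine_loss(f1, f2, gamma=2.0):
    """Hedged sketch of a focal-weighted cosine loss for positive pixel pairs.

    f1, f2: (N, C) arrays of per-pixel features from the two HSI acquisitions.
    gamma:  focusing exponent (an assumed hyperparameter, by analogy with focal loss).
    """
    # L2-normalize each per-pixel feature vector
    f1n = f1 / np.linalg.norm(f1, axis=1, keepdims=True)
    f2n = f2 / np.linalg.norm(f2, axis=1, keepdims=True)
    # Cosine similarity per pixel, in [-1, 1]
    cos = np.sum(f1n * f2n, axis=1)
    # Map similarity to a "hardness" in [0, 1]: low similarity -> hard positive
    hardness = (1.0 - cos) / 2.0
    # Focal weighting: hard pairs dominate the averaged loss
    return float(np.mean((hardness ** gamma) * (1.0 - cos)))
```

With `gamma > 0`, well-aligned (easy) pairs contribute almost nothing, so gradients concentrate on the hard positives, matching the imbalance argument in the abstract.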