Hyper-Embedder: Learning A Deep Embedder for Self-supervised Hyperspectral Dimensionality Reduction

2021 
Hyperspectral imaging has attracted growing interest among researchers in the geoscience and remote sensing fields owing to its rich spectral information. However, the high spectral dimensionality of hyperspectral images leads to substantial information redundancy. Manifold embedding is a mainstream strategy for nonlinear hyperspectral dimensionality reduction, but sensitivity to noise and the inability to handle the out-of-sample problem (i.e., new samples) are the main drawbacks of manifold-embedding-based methods. To this end, we propose Hyper-Embedder, a deep embedder learned in a self-supervised fashion for hyperspectral dimensionality reduction. Hyper-Embedder effectively reduces computational complexity and storage cost compared to conventional embedding models and improves robustness against various types of noise, e.g., spectral variability. More significantly, Hyper-Embedder learns an explicit nonlinear mapping that establishes a one-to-one correspondence between each original pixel (spectral signature) in the hyperspectral image and its dimension-reduced representation. These low-dimensional representations can be generated by existing classic nonlinear manifold embedding methods. In this paper, we learn this correspondence by optimizing a deep regression network. The resulting network not only captures the local topological structure of all spectral signatures in the hyperspectral data but also enables fast prediction and inference on samples from other hyperspectral scenes. The proposed Hyper-Embedder outperforms existing state-of-the-art hyperspectral dimensionality reduction algorithms on two commonly used hyperspectral datasets, i.e., the Indian Pines and Augsburg scenes.
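
The abstract describes training a deep regression network to reproduce the low-dimensional coordinates produced by a classic manifold embedding, so that the learned mapping can embed unseen pixels directly. The sketch below illustrates that general workflow under stated assumptions: Isomap is used as a stand-in for the classic embedding, and the MLP architecture, loss, and hyperparameters are illustrative choices rather than the paper's actual Hyper-Embedder configuration.

```python
# Hypothetical sketch of the workflow described in the abstract:
# 1) fit a classic nonlinear manifold embedding (Isomap, as an assumption)
#    on spectral signatures to obtain self-supervised low-dimensional targets;
# 2) train a small MLP regressor to map each spectrum to its target coordinates,
#    yielding an explicit mapping that handles out-of-sample pixels.
import numpy as np
import torch
import torch.nn as nn
from sklearn.manifold import Isomap

# Toy stand-in for a flattened hyperspectral cube: N pixels x B spectral bands.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 200)).astype(np.float32)

# Step 1: classic manifold embedding provides the regression targets.
n_components = 10
targets = Isomap(n_neighbors=10, n_components=n_components).fit_transform(X)
targets = torch.from_numpy(targets.astype(np.float32))
spectra = torch.from_numpy(X)

# Step 2: a deep regression network learns the explicit pixel -> embedding map.
embedder = nn.Sequential(
    nn.Linear(X.shape[1], 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, n_components),
)
optimizer = torch.optim.Adam(embedder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(embedder(spectra), targets)
    loss.backward()
    optimizer.step()

# Step 3: the learned mapping embeds unseen pixels (out-of-sample inference).
new_pixels = torch.from_numpy(rng.standard_normal((5, 200)).astype(np.float32))
low_dim = embedder(new_pixels)  # shape: (5, n_components)
print(low_dim.shape)
```

Because the regressor is an explicit function of the spectrum, new pixels from other scenes can be embedded with a single forward pass, avoiding the re-embedding step that conventional manifold methods would require.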