AutoNAS: Automatic Neural Architecture Search for Hyperspectral Unmixing

2022 
Due to their powerful automatic representation capabilities, deep learning (DL) techniques have made significant breakthroughs in hyperspectral unmixing (HU). Among DL approaches, autoencoders (AEs) have become a widely used and promising network architecture. However, AE-based methods rely heavily on manual design and may not fit specific datasets well. To unmix hyperspectral images more intelligently, we propose an automatic neural architecture search model for HU, AutoNAS for short, which determines the optimal network architecture by considering channel configurations and convolution kernels simultaneously. In AutoNAS, a self-supervised training mechanism based on hyperspectral images is first designed to generate training samples for the supernet. Then, an affine parameter sharing strategy is adopted, applying different affine transformations to the supernet weights during training, which enables finding the optimal channel configuration. Furthermore, on the basis of the obtained channel configuration, an evolutionary algorithm with additional computational constraints is introduced to achieve a flexible convolution kernel search by evaluating the unmixing results of different architectures in the supernet. Extensive experiments on four hyperspectral datasets demonstrate the effectiveness and superiority of the proposed AutoNAS in comparison with several state-of-the-art unmixing algorithms.
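The kernel-search stage described above can be sketched as a small evolutionary loop. The sketch below is hypothetical: the constants, the toy cost model, and the placeholder `fitness` function are all assumptions for illustration. In the actual AutoNAS, a candidate's fitness would come from evaluating its unmixing results inside the trained supernet, and the computational constraint would be a real FLOPs or parameter budget.

```python
import random

# Hypothetical sketch of evolutionary kernel search under a compute budget.
# All constants and the fitness function are illustrative stand-ins; AutoNAS
# scores candidates by their unmixing quality in the trained supernet.

KERNEL_CHOICES = [3, 5, 7]   # candidate convolution kernel sizes per layer
NUM_LAYERS = 4
FLOP_BUDGET = 120            # toy computational constraint (arbitrary units)

def flops(arch):
    # toy cost model: per-layer cost grows with kernel area
    return sum(k * k for k in arch)

def fitness(arch):
    # placeholder objective (prefers larger kernels); the real method
    # evaluates unmixing error of the candidate within the supernet
    return sum(arch)

def mutate(arch, rate=0.3):
    # resample each layer's kernel size with some probability
    return [random.choice(KERNEL_CHOICES) if random.random() < rate else k
            for k in arch]

def evolve(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    # initialize the population with feasible architectures only
    pop = []
    while len(pop) < pop_size:
        cand = [random.choice(KERNEL_CHOICES) for _ in range(NUM_LAYERS)]
        if flops(cand) <= FLOP_BUDGET:
            pop.append(cand)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]        # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            child = mutate(random.choice(parents))
            if flops(child) <= FLOP_BUDGET:  # enforce the compute constraint
                children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best kernel sizes:", best, "cost:", flops(best))
```

Because infeasible mutants are simply rejected, every architecture ever evaluated satisfies the constraint, mirroring how a budgeted search keeps the final model deployable.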