Parallel Implementation of Sparse Representation Classifiers for Hyperspectral Imagery on GPUs

2015 
Classification is one of the most important techniques for hyperspectral image analysis. Sparse representation is an extremely powerful tool for this purpose, but the high computational complexity of sparse representation-based classification limits its application in time-critical scenarios. To improve the efficiency and performance of sparse representation classification for hyperspectral image analysis, this paper develops a new parallel implementation on graphics processing units (GPUs). First, an optimized sparse representation model based on spatial correlation regularization and a spectral fidelity term is introduced. Then, we use this approach as a case study to illustrate the advantages and potential challenges of applying GPU parallel optimization principles to the considered problem. This paper proposes the first GPU optimization algorithm for sparse representation classification of hyperspectral images (SRCSC_P) and develops a parallel implementation of the proposed method using the compute unified device architecture (CUDA) on GPUs. The GPU parallel implementation is compared with serial and multicore implementations on CPUs. Experimental results on real hyperspectral datasets show that the average speedup of SRCSC_P is more than $130\times$ and that the proposed approach provides accurate results quickly, which is appealing for computationally efficient hyperspectral data processing.
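As a rough illustration of the kind of model the abstract describes (a generic sketch under our own assumptions, not the exact formulation from the paper), a joint sparse representation objective that combines a spectral fidelity term with a spatial correlation regularizer, followed by minimum-residual labeling, might take the form

\[
\min_{A}\; \underbrace{\lVert Y - D A \rVert_F^2}_{\text{spectral fidelity}}
\;+\; \lambda \lVert A \rVert_1
\;+\; \beta \sum_{i} \sum_{j \in \mathcal{N}(i)} \lVert a_i - a_j \rVert_2^2,
\qquad
\mathrm{class}(y_i) \;=\; \arg\min_{c}\; \lVert y_i - D_c\, \delta_c(a_i) \rVert_2,
\]

where $Y$ collects the pixel spectra as columns, $D$ is the dictionary of labeled training samples, $A = [a_1, \dots, a_n]$ holds the per-pixel sparse codes, $\mathcal{N}(i)$ is a spatial neighborhood of pixel $i$, and $\delta_c$ selects the coefficients associated with class $c$. The exact regularizer, norms, and weights used by SRCSC may differ.

The final labeling step is independent across pixels, which is what makes a GPU mapping attractive. Below is a minimal CUDA sketch that assigns one thread per pixel for the minimum-residual class assignment, assuming the sparse codes have already been computed; this is a hypothetical kernel for illustration, not the authors' SRCSC_P code.

// Hypothetical sketch, not the authors' SRCSC_P kernels: one thread per pixel
// performs the minimum-residual class assignment, assuming the sparse codes X
// have already been computed (e.g., by a GPU sparse solver).
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void classifyByResidual(const float* Y,         // spectrum of pixel i at Y + i*bands
                                   const float* D,         // dictionary atom j at D + j*bands
                                   const float* X,         // sparse code of pixel i at X + i*numAtoms
                                   const int*   atomClass, // class label of each dictionary atom
                                   int bands, int numAtoms, int numClasses, int numPixels,
                                   int* labels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;

    const float* y = Y + (size_t)i * bands;
    const float* x = X + (size_t)i * numAtoms;

    float bestRes = 1e30f;
    int bestClass = 0;
    for (int c = 0; c < numClasses; ++c) {
        float res = 0.0f;
        for (int b = 0; b < bands; ++b) {
            float recon = 0.0f;                     // reconstruction from class-c atoms only
            for (int j = 0; j < numAtoms; ++j)
                if (atomClass[j] == c) recon += D[(size_t)j * bands + b] * x[j];
            float d = y[b] - recon;
            res += d * d;                           // squared residual for class c
        }
        if (res < bestRes) { bestRes = res; bestClass = c; }
    }
    labels[i] = bestClass;                          // label of the minimum-residual class
}

int main() {
    // Tiny synthetic problem just to show the launch pattern.
    const int bands = 4, numAtoms = 6, numClasses = 2, numPixels = 8;
    std::vector<float> hY(bands * numPixels, 1.0f), hD(bands * numAtoms, 0.5f),
                       hX(numAtoms * numPixels, 0.1f);
    std::vector<int>   hClass = {0, 0, 0, 1, 1, 1};
    std::vector<int>   hLabels(numPixels, -1);

    float *dY, *dD, *dX; int *dClass, *dLabels;
    cudaMalloc((void**)&dY, hY.size() * sizeof(float));
    cudaMalloc((void**)&dD, hD.size() * sizeof(float));
    cudaMalloc((void**)&dX, hX.size() * sizeof(float));
    cudaMalloc((void**)&dClass, hClass.size() * sizeof(int));
    cudaMalloc((void**)&dLabels, hLabels.size() * sizeof(int));
    cudaMemcpy(dY, hY.data(), hY.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dD, hD.data(), hD.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dX, hX.data(), hX.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dClass, hClass.data(), hClass.size() * sizeof(int), cudaMemcpyHostToDevice);

    int threads = 128, blocks = (numPixels + threads - 1) / threads;
    classifyByResidual<<<blocks, threads>>>(dY, dD, dX, dClass,
                                            bands, numAtoms, numClasses, numPixels, dLabels);
    cudaMemcpy(hLabels.data(), dLabels, hLabels.size() * sizeof(int), cudaMemcpyDeviceToHost);

    for (int i = 0; i < numPixels; ++i) printf("pixel %d -> class %d\n", i, hLabels[i]);
    cudaFree(dY); cudaFree(dD); cudaFree(dX); cudaFree(dClass); cudaFree(dLabels);
    return 0;
}

In a real implementation the sparse-coding stage, not this labeling step, dominates the runtime, so the reported speedups would depend mainly on how that stage is parallelized.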