Real-Time Implementation of the Sparse Multinomial Logistic Regression for Hyperspectral Image Classification on GPUs
2015
In this letter, a real-time implementation of the logistic regression via variable splitting and augmented Lagrangian (LORSAL) algorithm for sparse multinomial logistic regression is presented on commodity graphics processing units (GPUs) using Nvidia's compute unified device architecture (CUDA). The proposed parallel method exploits the GPU architecture at a low level, including its shared memory, and takes full advantage of the computational power of GPUs to achieve, for the first time in the hyperspectral imaging literature, real-time classification of hyperspectral images. Our experimental results reveal remarkable acceleration factors and real-time performance, while retaining exactly the same classification accuracy as the serial and multicore versions of the classifier.
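The letter does not include source code; the following is a minimal CUDA sketch of the kind of shared-memory optimization described above, applied to the classification stage of a multinomial logistic regression model (each thread scores one pixel against a regressor matrix cached in shared memory). The kernel name, dimensions, and host setup (mlrClassify, NUM_CLASSES, NUM_FEATURES) are illustrative assumptions, not the authors' actual implementation.

```cuda
// Sketch only: per-pixel multinomial logistic regression scoring on the GPU.
// The regressor matrix W (NUM_CLASSES x NUM_FEATURES) is staged in shared
// memory so that all threads in a block reuse it, illustrating the low-level
// shared-memory exploitation described in the letter. Sizes are assumed.
#include <cuda_runtime.h>
#include <cstdio>

#define NUM_CLASSES  16    // assumed number of land-cover classes
#define NUM_FEATURES 50    // assumed feature dimension per pixel
#define BLOCK_SIZE   256

__global__ void mlrClassify(const float* __restrict__ pixels,  // [numPixels x NUM_FEATURES]
                            const float* __restrict__ W,       // [NUM_CLASSES x NUM_FEATURES]
                            int* __restrict__ labels,
                            int numPixels)
{
    // Cooperative load of the weight matrix into shared memory.
    __shared__ float sW[NUM_CLASSES * NUM_FEATURES];
    for (int i = threadIdx.x; i < NUM_CLASSES * NUM_FEATURES; i += blockDim.x)
        sW[i] = W[i];
    __syncthreads();

    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= numPixels) return;

    // Each thread computes the class scores w_c^T x for one pixel and
    // assigns the arg-max class label.
    const float* x = &pixels[p * NUM_FEATURES];
    float best = -1e30f;
    int bestClass = 0;
    for (int c = 0; c < NUM_CLASSES; ++c) {
        float score = 0.0f;
        for (int f = 0; f < NUM_FEATURES; ++f)
            score += sW[c * NUM_FEATURES + f] * x[f];
        if (score > best) { best = score; bestClass = c; }
    }
    labels[p] = bestClass;
}

int main()
{
    // Dummy host driver: allocates zeroed device buffers and launches the kernel.
    const int numPixels = 1 << 16;
    float *dPixels, *dW;
    int *dLabels;
    cudaMalloc((void**)&dPixels, numPixels * NUM_FEATURES * sizeof(float));
    cudaMalloc((void**)&dW, NUM_CLASSES * NUM_FEATURES * sizeof(float));
    cudaMalloc((void**)&dLabels, numPixels * sizeof(int));
    cudaMemset(dPixels, 0, numPixels * NUM_FEATURES * sizeof(float));
    cudaMemset(dW, 0, NUM_CLASSES * NUM_FEATURES * sizeof(float));

    int grid = (numPixels + BLOCK_SIZE - 1) / BLOCK_SIZE;
    mlrClassify<<<grid, BLOCK_SIZE>>>(dPixels, dW, dLabels, numPixels);
    cudaDeviceSynchronize();
    printf("classified %d pixels\n", numPixels);

    cudaFree(dPixels); cudaFree(dW); cudaFree(dLabels);
    return 0;
}
```

One block-wide copy of W serves every pixel handled by the block, so global-memory traffic for the weights is amortized across BLOCK_SIZE pixels; this is one common way such a classifier maps onto the GPU memory hierarchy.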
Keywords:
- Artificial intelligence
- Augmented Lagrangian method
- Contextual image classification
- Multi-core processor
- Computer vision
- Logistic regression
- Multinomial logistic regression
- Shared memory
- Architecture
- General-purpose computing on graphics processing units
- Computer science
- Hyperspectral imaging
- Kernel (computing)
- Classifier