Denoising applied to spectroscopies – Part II: Decreasing computation time

2019 
Abstract

Spectroscopies are of fundamental importance but can suffer from low sensitivity. Singular value decomposition (SVD) is a powerful mathematical tool that, combined with low-rank approximation, can denoise spectra and increase sensitivity. SVD also underlies data mining through principal component analysis (PCA). In this paper, we focus on reducing the duration of the SVD, which is a time-consuming computation. Both Intel processors (CPUs) and Nvidia graphics cards (GPUs) were benchmarked. A 100-fold speed-up was achieved by combining the divide-and-conquer algorithm, the Intel Math Kernel Library (MKL), SSE3 (Streaming SIMD Extensions 3) hardware instructions, and single precision. In that configuration, the CPU can outperform the GPU driven by CUDA technology. These results provide a strong foundation for optimizing SVD computation at the user scale.
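To illustrate the technique the abstract describes, the sketch below shows SVD-based low-rank denoising in single precision. This is a minimal illustration, not the authors' benchmark code: the synthetic spectra, the `rank` choice, and the function name `denoise_lowrank` are assumptions. NumPy's `linalg.svd` dispatches to LAPACK's divide-and-conquer routine (`gesdd`), the same algorithmic family the paper benchmarks.

```python
import numpy as np

def denoise_lowrank(spectra, rank):
    """Denoise a matrix of spectra by truncated SVD (low-rank approximation).

    Casting to float32 (single precision) roughly halves memory traffic,
    one of the speed-up levers discussed in the paper.
    """
    # NumPy's svd calls LAPACK's divide-and-conquer driver (gesdd).
    U, s, Vt = np.linalg.svd(spectra.astype(np.float32), full_matrices=False)
    # Keep only the first `rank` singular components; the discarded
    # components are treated as noise.
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Hypothetical usage: 64 repeated noisy copies of one Gaussian line shape
rng = np.random.default_rng(0)
line = np.exp(-((np.arange(512) - 256) / 40.0) ** 2)
clean = np.tile(line, (64, 1))                      # rank-1 signal matrix
noisy = clean + 0.1 * rng.standard_normal((64, 512))
denoised = denoise_lowrank(noisy, rank=1)
```

Because the clean signal here is exactly rank 1, truncating to one singular component removes most of the added noise while preserving the line shape.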