Performance Evaluation of Gradient-based Dimensionality Reduction Methods on Different Devices

2020 
In this paper, we study the efficiency of the nonlinear mapping dimensionality reduction technique based on classic and stochastic gradient descent. In particular, we implemented these algorithms using CUDA for NVIDIA GPUs, HIP for AMD GPUs, and OpenMP with AVX2 for CPUs. We evaluate the algorithms for different data volumes, different values of the gradient step size parameter, and various computing devices, including the AMD Radeon Vega 56 and NVIDIA GeForce GTX 1080 Ti GPUs and the AMD Ryzen 7 3700X CPU. We find that the performance advantage of GPUs grows with the number of points. Using the AMD Radeon Vega 56, we achieved 3.5x and 6.75x performance improvements over the 16-thread CPU implementation for classic and stochastic gradient descent, respectively. All experiments are carried out on a synthetic dataset.
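To make the evaluated computation concrete, the following is a minimal sketch of one classic gradient-descent epoch for Sammon-style nonlinear mapping, parallelized over points with OpenMP in the spirit of the paper's CPU implementation. The plain squared-error stress over pairwise distances, the 2D target dimensionality, and all names and parameter values here are illustrative assumptions, not the authors' exact code.

// Hedged sketch: one full-gradient epoch of nonlinear mapping.
// Assumed stress: sum over pairs of (d_ij - d*_ij)^2, embedding in 2D.
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <omp.h>

// dStar:  n*n matrix of distances in the original high-dimensional space.
// y:      n*2 low-dimensional embedding, updated in place.
// lambda: gradient step size (the parameter varied in the experiments).
void gradientEpoch(const std::vector<float>& dStar, std::vector<float>& y,
                   int n, float lambda) {
    std::vector<float> grad(2 * n, 0.0f);
    #pragma omp parallel for               // each thread owns a set of points
    for (int i = 0; i < n; ++i) {
        float gx = 0.0f, gy = 0.0f;
        for (int j = 0; j < n; ++j) {
            if (j == i) continue;
            float dx = y[2*i]   - y[2*j];
            float dy = y[2*i+1] - y[2*j+1];
            float d  = std::sqrt(dx*dx + dy*dy) + 1e-8f; // avoid division by 0
            float c  = 2.0f * (d - dStar[i*n + j]) / d;  // d/dy_i of (d - d*)^2
            gx += c * dx;
            gy += c * dy;
        }
        grad[2*i]   = gx;
        grad[2*i+1] = gy;
    }
    for (int i = 0; i < 2 * n; ++i)        // apply the full-gradient update
        y[i] -= lambda * grad[i];
}

int main() {
    const int n = 512, dim = 16;           // synthetic dataset, as in the paper
    std::vector<float> x(n * dim), y(2 * n), dStar(n * n);
    for (auto& v : x) v = rand() / (float)RAND_MAX;
    for (auto& v : y) v = rand() / (float)RAND_MAX;
    for (int i = 0; i < n; ++i)            // precompute high-dim distances
        for (int j = 0; j < n; ++j) {
            float s = 0.0f;
            for (int k = 0; k < dim; ++k) {
                float d = x[i*dim + k] - x[j*dim + k];
                s += d * d;
            }
            dStar[i*n + j] = std::sqrt(s);
        }
    for (int epoch = 0; epoch < 100; ++epoch)
        gradientEpoch(dStar, y, n, 0.001f);
    printf("y[0] = (%f, %f)\n", y[0], y[1]);
    return 0;
}

A stochastic variant would update a single randomly chosen point (or point pair) per step instead of accumulating the full gradient, which is what makes the per-iteration cost and parallelization pattern differ between the two algorithms compared in the paper.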