Realization of CUDA-based real-time multi-camera visual SLAM in embedded systems

2019 
The real-time capability of multi-camera visual simultaneous localization and mapping (SLAM) in embedded systems is vital for autonomous robotic navigation. However, owing to its time-consuming feature extraction, multi-camera visual SLAM has high computational complexity and is difficult to run in real time on embedded systems. This study proposes a combined central processing unit and graphics processing unit (CPU–GPU) acceleration strategy for multi-camera visual SLAM to reduce this computational burden, improve computational efficiency, and achieve real-time operation in embedded systems. First, a GPU-based feature extraction acceleration algorithm is introduced that uses the compute unified device architecture (CUDA) to parallelize the time-consuming feature extraction step. Then, a CPU-based multi-threaded pipelining method that performs image reading, feature extraction, and tracking concurrently is proposed to further improve computational efficiency; it resolves the load imbalance introduced by GPU use and improves the utilization of computing resources. Extensive experimental results demonstrate that the improved multi-camera visual SLAM runs at 15 frames per second in embedded systems, meeting the real-time requirement, and is three times faster than the original CPU-only method. Our open-source code is available online: https://github.com/CASHIPS-ComputerVision.
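
The abstract does not spell out how the feature extraction is parallelized; the details are in the linked repository. As a rough illustration only, a CUDA-parallel corner detector typically assigns one thread per pixel. The sketch below is hypothetical (the kernel name, threshold, and the simplification of the FAST segment test to a plain count, ignoring arc contiguity, are all assumptions, not the paper's method):

#include <cuda_runtime.h>

// Hypothetical per-pixel corner-score kernel: one thread scores one pixel
// by comparing it against the standard 16-pixel Bresenham circle of radius 3.
__global__ void cornerScoreKernel(const unsigned char* img, float* score,
                                  int width, int height, int thresh)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < 3 || y < 3 || x >= width - 3 || y >= height - 3) return;

    // Offsets of the 16-pixel circle around (x, y), as used by FAST.
    const int cx[16] = {0,1,2,3,3,3,2,1,0,-1,-2,-3,-3,-3,-2,-1};
    const int cy[16] = {-3,-3,-2,-1,0,1,2,3,3,3,2,1,0,-1,-2,-3};

    int center = img[y * width + x];
    int count = 0;
    for (int i = 0; i < 16; ++i) {
        int d = img[(y + cy[i]) * width + (x + cx[i])] - center;
        if (d < 0) d = -d;
        if (d > thresh) ++count;        // simplified: no contiguity check
    }
    score[y * width + x] = (count >= 12) ? (float)count : 0.0f;
}

// Host-side launch: one kernel call per camera image, so the frames from
// all cameras can be scored back to back on the GPU.
void scoreImageOnGpu(const unsigned char* d_img, float* d_score,
                     int width, int height)
{
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    cornerScoreKernel<<<grid, block>>>(d_img, d_score, width, height, 20);
    cudaDeviceSynchronize();
}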
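
The CPU-side pipelining described in the abstract splits the per-frame work into three concurrent stages: image reading, feature extraction, and tracking. A minimal host-code sketch of such a three-stage pipeline follows, assuming bounded queues between stages; the BoundedQueue and Frame types and the queue capacities are illustrative inventions, not taken from the paper:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>

// Simple bounded producer/consumer queue; close() lets a downstream stage
// drain remaining items and then stop.
template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(size_t cap) : cap_(cap) {}
    void push(T v) {
        std::unique_lock<std::mutex> lk(m_);
        notFull_.wait(lk, [&] { return q_.size() < cap_; });
        q_.push(std::move(v));
        notEmpty_.notify_one();
    }
    bool pop(T& out) {
        std::unique_lock<std::mutex> lk(m_);
        notEmpty_.wait(lk, [&] { return !q_.empty() || closed_; });
        if (q_.empty()) return false;   // closed and fully drained
        out = std::move(q_.front());
        q_.pop();
        notFull_.notify_one();
        return true;
    }
    void close() {
        std::lock_guard<std::mutex> lk(m_);
        closed_ = true;
        notEmpty_.notify_all();
    }
private:
    std::mutex m_;
    std::condition_variable notFull_, notEmpty_;
    std::queue<T> q_;
    size_t cap_;
    bool closed_ = false;
};

struct Frame { int id = 0; /* image data, keypoints, pose, ... */ };

int main() {
    BoundedQueue<Frame> readQ(4), extractQ(4);

    // Stage 1: read camera images and feed the extraction stage.
    std::thread reader([&] {
        for (int i = 0; i < 100; ++i) readQ.push(Frame{i});
        readQ.close();
    });
    // Stage 2: feature extraction (the GPU kernel would be invoked here).
    std::thread extractor([&] {
        Frame f;
        while (readQ.pop(f)) extractQ.push(f);
        extractQ.close();
    });
    // Stage 3: tracking on the CPU, overlapped with the stages above.
    std::thread tracker([&] {
        Frame f;
        while (extractQ.pop(f)) { /* pose tracking */ }
    });

    reader.join(); extractor.join(); tracker.join();
    return 0;
}

With this stage split, the CPU keeps reading and tracking while the GPU extracts features, which is one plausible way to realize the load balancing the abstract describes.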