Building and optimization of 3D semantic map based on Lidar and camera fusion

2020 
Abstract For robot applications in complex scenarios, traditional geometric maps are insufficient because they do not support interaction with the environment. In this paper, a large-scale, accurate three-dimensional (3D) semantic map that integrates Lidar and camera information is presented for real-time road scenes. First, simultaneous localization and mapping (SLAM) is performed to locate the robot through multi-sensor fusion of the Lidar and an inertial measurement unit (IMU), and a map of the surrounding scene is constructed while the robot moves. Next, convolutional neural network (CNN)-based semantic segmentation of camera images is employed to obtain per-pixel labels of the environment. After temporal and spatial synchronization, the fused Lidar and camera data are used to generate semantically labeled point-cloud frames, which are then assembled into a semantic map according to the robot's pose. In addition, to improve classification accuracy, a higher-order 3D fully connected conditional random field (CRF) method is utilized to optimize the semantic map. Finally, extensive experiments on the KITTI dataset illustrate the effectiveness of the proposed method.
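The abstract does not name the segmentation network, so the sketch below stands in an off-the-shelf torchvision DeepLabV3 model purely to show where the per-pixel semantic labels would come from; the model choice, preprocessing, and the `segment` helper are assumptions for illustration, not the paper's method.

```python
import torch
import torchvision.transforms as T
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

# Off-the-shelf model as a stand-in for the paper's (unspecified) CNN.
# Assumes torchvision >= 0.13 for the `weights` argument.
model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def segment(image_path):
    """Return (H, W) per-pixel class ids and (C, H, W) class probabilities."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)        # 1 x 3 x H x W
    with torch.no_grad():
        logits = model(x)["out"][0]         # C x H x W
    probs = torch.softmax(logits, dim=0)
    return probs.argmax(dim=0).numpy(), probs.numpy()
```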
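For the Lidar-camera fusion step, a minimal sketch of projecting Lidar points into the segmented image and copying labels back onto the points is shown below, assuming KITTI-style calibration matrices (`Tr_velo_to_cam`, `R0_rect`, `P2`); the paper's exact synchronization and calibration pipeline is not detailed in the abstract.

```python
import numpy as np

def label_point_cloud(points, seg_labels, Tr_velo_to_cam, R0_rect, P2):
    """Assign per-pixel semantic labels to Lidar points (KITTI-style sketch).

    points        : (N, 3) Lidar points in the Velodyne frame.
    seg_labels    : (H, W) semantic class id per image pixel (CNN output).
    Tr_velo_to_cam: (4, 4) extrinsic Lidar -> camera transform.
    R0_rect       : (4, 4) camera rectification matrix.
    P2            : (3, 4) projection matrix of the left color camera.
    Returns (N,) labels; points outside the image get -1.
    """
    H, W = seg_labels.shape
    N = points.shape[0]
    hom = np.hstack([points, np.ones((N, 1))])       # homogeneous coordinates
    cam = R0_rect @ Tr_velo_to_cam @ hom.T           # 4 x N, rectified camera frame
    img = P2 @ cam                                   # 3 x N, image plane
    z = img[2]
    u = np.round(img[0] / z).astype(int)
    v = np.round(img[1] / z).astype(int)
    labels = np.full(N, -1, dtype=int)
    ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)  # in front of camera, in frame
    labels[ok] = seg_labels[v[ok], u[ok]]
    return labels
```

Points that fall outside the camera frustum keep the label -1 and can either be dropped or labeled from later frames once the map is accumulated.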
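Assembling the labeled frames into a global map according to the pose can be sketched as transforming each frame by its SLAM pose estimate and concatenating the results; the `accumulate_frames` helper below is hypothetical and omits the voxel downsampling and label fusion a real system would need.

```python
import numpy as np

def accumulate_frames(frames):
    """Stitch per-frame labeled point clouds into a global semantic map.

    frames: iterable of (points (N, 3), labels (N,), pose (4, 4)) tuples,
            where pose is the SLAM estimate mapping the sensor frame to world.
    Returns (M, 3) world-frame points and (M,) labels.
    """
    all_pts, all_lbl = [], []
    for points, labels, pose in frames:
        hom = np.hstack([points, np.ones((len(points), 1))])
        world = (pose @ hom.T).T[:, :3]     # sensor frame -> world frame
        keep = labels >= 0                  # drop points with no image label
        all_pts.append(world[keep])
        all_lbl.append(labels[keep])
    return np.vstack(all_pts), np.concatenate(all_lbl)
```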
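The paper's higher-order 3D fully connected CRF is not specified here; as a rough stand-in, the sketch below runs mean-field inference for a plain pairwise fully connected CRF with a Gaussian kernel over 3D positions. The naive O(N²) kernel is only viable for small point sets (practical implementations use lattice-based filtering), and the paper's higher-order terms are omitted.

```python
import numpy as np

def crf_mean_field(positions, unary_probs, sigma=0.5, w=3.0, iters=5):
    """Naive mean-field inference for a fully connected pairwise CRF on 3D points.

    positions   : (N, 3) point coordinates.
    unary_probs : (N, C) per-point class probabilities (e.g., projected CNN scores).
    Returns refined (N, C) marginals after `iters` mean-field updates.
    """
    # Gaussian smoothness kernel on 3D distance (O(N^2): sketch only).
    d2 = np.sum((positions[:, None, :] - positions[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(K, 0.0)                # exclude self-messages

    log_unary = np.log(unary_probs + 1e-10)
    Q = unary_probs.copy()
    for _ in range(iters):
        msg = K @ Q                         # aggregate messages from all other points
        # Reward agreement with nearby points; under a Potts compatibility this
        # is equivalent (up to per-point normalization) to penalizing disagreement.
        Q = np.exp(log_unary + w * msg)
        Q /= Q.sum(axis=1, keepdims=True)   # renormalize per point
    return Q
```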