Object Detection based on Fusing Monocular Camera and Lidar Data in Decision Level Using D-S Evidence Theory

2020 
Visual sensors such as optical cameras have serious shortcomings: they are vulnerable to weather conditions and light intensity, which may lead to catastrophic accidents. Fortunately, prevalent 3D sensors such as Light Detection and Ranging (Lidar) can overcome these drawbacks, and fusing monocular camera and 3D Lidar data often achieves better performance than using either sensor alone. In this paper, we propose an object detection approach that fuses monocular camera data and Lidar data at the decision level based on Dempster-Shafer (D-S) evidence theory. First, the 3D point cloud collected from the Lidar is projected onto the image plane, and an upsampling method is adopted to obtain dense depth maps. Then, the RGB images and dense depth maps are separately fed into the YOLOv3 detection framework to extract object features. Next, the final detection attributes, i.e., the class confidence and the bounding box, are generated by applying Dempster's combination rule and taking the intersection of the associated bounding boxes. Finally, the mean Average Precision (mAP) of the proposed method is improved by 0.5% and 3.1% compared with the single-camera method and the Lidar-only method, respectively.
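As a rough illustration of the first step, the sketch below projects Lidar points onto the image plane with a standard pinhole camera model. The calibration inputs `T_cam_lidar` and `K` and all names are hypothetical placeholders, and the subsequent upsampling of the sparse projected depths into a dense depth map is not shown.

```python
import numpy as np

def project_points(points_lidar, T_cam_lidar, K):
    """Project Nx3 Lidar points into the image plane (pinhole model).

    points_lidar : (N, 3) points in the Lidar frame
    T_cam_lidar  : (4, 4) rigid transform from Lidar to camera frame (extrinsics)
    K            : (3, 3) camera intrinsic matrix
    Returns pixel coordinates (M, 2) and depths (M,) for points in front of the camera.
    """
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])  # homogeneous coordinates (N, 4)
    cam = (T_cam_lidar @ homo.T).T[:, :3]              # points in the camera frame
    front = cam[:, 2] > 0                              # discard points behind the camera
    cam = cam[front]
    uv = (K @ cam.T).T                                 # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]                        # normalize by depth to get pixels
    return uv, cam[:, 2]
```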
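The decision-level fusion itself rests on Dempster's combination rule. The following sketch combines the class confidences reported by the camera-based and Lidar-based detectors for one matched object. Assigning each detector's leftover confidence to the full frame of discernment (ignorance) is an assumption here, since the abstract does not specify how the mass functions are constructed, and the three-class frame is illustrative only.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) via Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("Total conflict: the two sources are incompatible")
    # Normalize by 1 - K, where K is the total conflicting mass
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Example: camera and Lidar detectors each report a confidence for the class
# "car"; the remainder of each mass is assigned to the full frame Theta
# (ignorance), one common way to turn a single confidence into a mass function.
THETA = frozenset({"car", "pedestrian", "cyclist"})
camera = {frozenset({"car"}): 0.85, THETA: 0.15}
lidar  = {frozenset({"car"}): 0.70, THETA: 0.30}
fused = dempster_combine(camera, lidar)
print(fused)  # mass on {"car"} is about 0.955, above either single-sensor value
```

In this example, two agreeing sources yield a fused mass on "car" higher than either sensor reports alone, which is the evidence-accumulation behavior that motivates D-S fusion; the fused bounding box would then be taken as the intersection of the two detectors' associated boxes.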