Temporal Point Cloud Fusion With Scene Flow for Robust 3D Object Tracking

2022 
Non-visual range sensors such as LiDAR can detect, localize, and track objects in complex dynamic scenes thanks to their greater stability compared with vision-based sensors such as cameras. However, because point clouds are unordered, sparse, and irregular, exploiting the temporal information in dynamic 3D point cloud sequences is far more challenging than in image sequences, where such information has long been used to improve detection and tracking. In this paper, we propose a novel scene-flow-based point cloud feature fusion module to tackle this challenge, and build on it a 3D object tracking framework that exploits temporal motion information. Moreover, we carefully design several training schemes that contribute to the success of this new module by mitigating overfitting and the long-tailed distribution of object categories. Extensive experiments on the public KITTI 3D object tracking dataset demonstrate the effectiveness of the proposed method, which achieves superior results to the baselines.
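The core idea the abstract describes (warping a previous frame's point features into the current frame via estimated scene flow, then fusing them with current-frame features) can be sketched minimally as follows. This is not the paper's implementation; the nearest-neighbor matching, the radius threshold, and the element-wise-max fusion rule are illustrative assumptions.

```python
import numpy as np

def fuse_with_scene_flow(prev_pts, prev_feats, flow, curr_pts, curr_feats, radius=0.5):
    """Temporal feature fusion sketch (illustrative, not the paper's module).

    prev_pts:   (N, 3) previous-frame point coordinates
    prev_feats: (N, C) per-point features from the previous frame
    flow:       (N, 3) estimated scene-flow vectors for the previous frame
    curr_pts:   (M, 3) current-frame point coordinates
    curr_feats: (M, C) per-point features from the current frame
    """
    # Propagate previous-frame points into the current frame via scene flow.
    warped = prev_pts + flow
    fused = curr_feats.copy()
    for i, p in enumerate(curr_pts):
        # Match each current point to its nearest warped previous point.
        d = np.linalg.norm(warped - p, axis=1)
        j = int(np.argmin(d))
        # Fuse only when the match is close enough; here, element-wise max
        # is an assumed fusion rule standing in for a learned one.
        if d[j] <= radius:
            fused[i] = np.maximum(curr_feats[i], prev_feats[j])
    return fused
```

In a learned pipeline, the hand-coded max would typically be replaced by a small network over the concatenated feature pair, but the warp-then-aggregate structure is the same.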