LiCaS3: A Simple LiDAR–Camera Self-Supervised Synchronization Method

2022 
Recent advances in robotics and deep learning demonstrate promising 3-D perception performance via fusing light detection and ranging (LiDAR) sensor and camera data, where both spatial calibration and temporal synchronization are generally required. While the LiDAR–camera calibration problem has been actively studied during the past few years, LiDAR–camera synchronization has received less attention and is mainly addressed by a conventional pipeline consisting of clock synchronization and temporal synchronization. The conventional pipeline has certain potential limitations, which have not been sufficiently addressed and could be a bottleneck for the wide adoption of low-cost LiDAR–camera platforms. Departing from the conventional pipeline, in this article, we propose LiCaS3, the first deep-learning-based framework for the LiDAR–camera synchronization task via self-supervised learning. The proposed LiCaS3 does not require hardware synchronization or extra annotations and can be deployed both online and offline. Evaluated on both the KITTI and Newer College datasets, the proposed method shows promising performance. The code will be publicly available at https://github.com/KleinYuan/LiCaS3 .
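The temporal-synchronization step of the conventional pipeline mentioned above can be illustrated by a simple nearest-timestamp matcher between the two sensor streams. This is a generic sketch, not the method of the paper: the function name, the 50 ms tolerance, and the example rates are illustrative assumptions.

```python
import bisect

def nearest_timestamp_pairs(lidar_ts, cam_ts, max_offset=0.05):
    """Match each LiDAR scan to the closest camera frame in time.

    lidar_ts, cam_ts: sorted timestamps in seconds.
    max_offset: illustrative tolerance (seconds); unmatched scans are dropped.
    Returns a list of (lidar_index, camera_index) pairs.
    """
    pairs = []
    for i, t in enumerate(lidar_ts):
        j = bisect.bisect_left(cam_ts, t)
        # Candidate neighbors: the camera frame just before and just after t.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(cam_ts)]
        best = min(candidates, key=lambda k: abs(cam_ts[k] - t))
        if abs(cam_ts[best] - t) <= max_offset:
            pairs.append((i, best))
    return pairs

# Example: a 10 Hz LiDAR against a ~30 Hz camera with a small clock offset.
lidar = [0.00, 0.10, 0.20, 0.30]
camera = [0.012, 0.045, 0.078, 0.112, 0.145, 0.212, 0.295]
print(nearest_timestamp_pairs(lidar, camera))  # → [(0, 0), (1, 3), (2, 5), (3, 6)]
```

Such a matcher presumes accurate, drift-free clocks on both sensors, which is exactly the assumption the self-supervised approach aims to remove.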