Three-Dimensional Object Co-Localization From Mobile LiDAR Point Clouds

2021 
Recent 3D deep learning methods require large amounts of supervised 3D point-cloud data to learn statistical models for various ITS-related tasks, e.g. object classification, object detection, and object segmentation. However, manually annotating 3D point-cloud data is time-consuming and labor-intensive. This paper therefore aims at co-locating 3D objects from mobile LiDAR point clouds without the help of any supervised training data. To this end, we propose a new framework that implements 3D object co-localization, automatically extracting objects of the same category from different point-cloud scenes. Specifically, to search for and exploit the shared information among objects in different point-cloud scenes, we formulate 3D object co-localization as a maximal subgraph matching problem. During graph construction, to handle the inconsistent representation of objects across scenes, we propose a multi-scale clustering method that represents objects with a pyramid structure. In addition, because the maximal subgraph matching problem is NP-hard, we propose a stochastic search algorithm to generate the co-localization results. Extensive experiments on point-cloud data collected by the Riegl VMX-450 mobile LiDAR system demonstrate the promising performance of the proposed framework.
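The maximal-subgraph-matching idea can be illustrated with a minimal sketch: given two graphs (one per scene, nodes standing in for clustered object parts), search for an injective node mapping that preserves as many edges as possible. This is an illustrative toy only; the graph representation, the greedy acceptance rule, and all names below are assumptions, not the paper's actual algorithm.

```python
import random

def preserved_edges(match, g1, g2):
    """Count edges of g1 whose endpoints map onto an edge of g2 under `match`.
    Graphs are dicts: node -> set of adjacent nodes."""
    mapped = list(match)
    return sum(
        1
        for i, u in enumerate(mapped)
        for v in mapped[i + 1:]
        if v in g1[u] and match[v] in g2[match[u]]
    )

def stochastic_subgraph_match(g1, g2, iters=2000, seed=0):
    """Toy stochastic search over injective partial node mappings (hypothetical
    stand-in for the paper's stochastic search; greedy acceptance assumed)."""
    rng = random.Random(seed)
    match = {}
    for _ in range(iters):
        cand = dict(match)
        u = rng.choice(list(g1))                 # pick a node of g1 at random
        used = set(cand.values()) - {cand.get(u)}
        free = [w for w in g2 if w not in used]  # keep the mapping injective
        if not free:
            continue
        cand[u] = rng.choice(free)               # (re)assign u to a g2 node
        # accept moves that preserve at least as many edges with as large a map
        if (preserved_edges(cand, g1, g2), len(cand)) >= \
           (preserved_edges(match, g1, g2), len(match)):
            match = cand
    return match, preserved_edges(match, g1, g2)
```

On two small isomorphic graphs (e.g. two triangles), the search converges to a full mapping preserving every edge; on real scenes, node attributes and geometric consistency terms would drive the score instead of raw adjacency alone.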