Multi-task joint learning of 3D keypoint saliency and correspondence estimation

2021 
Abstract: 3D keypoint detection is an essential problem in computer graphics and computer vision, especially for 3D shape analysis and model matching. In this paper, we propose a novel multi-task joint learning network architecture for 3D keypoint saliency and correspondence estimation. To better capture the local and global features of the 3D model, we design a spatial multi-scale perception module that concatenates feature maps at different scales during point cloud feature extraction. In the multi-task joint learning process, we obtain the offset vector from each point to its keypoint through a voting mechanism. This mechanism predicts a confidence value for each point in the 3D model and then filters out low-confidence points to generate a reliable voting result. Afterwards, keypoint saliency estimation is achieved through clustering. In parallel, keypoint correspondence estimation is learned by predicting the semantic labels of the selected high-confidence points. Through extensive evaluations, ablation studies, and comparisons, we demonstrate that the proposed architecture can both efficiently and accurately detect the positions and semantic labels of 3D keypoints, outperforming state-of-the-art approaches to 3D keypoint detection.
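The voting-and-clustering stage summarized in the abstract can be sketched roughly as follows. This is a minimal illustrative assumption, not the paper's actual implementation: the function name `vote_keypoints`, the greedy radius clustering (standing in for the unspecified clustering step), and all thresholds are hypothetical.

```python
import numpy as np

def vote_keypoints(points, offsets, confidences, conf_thresh=0.5, radius=0.1):
    """Hypothetical sketch of the voting + clustering stage.

    points:      (N, 3) point cloud
    offsets:     (N, 3) predicted offset from each point to its keypoint
    confidences: (N,)   per-point confidence scores
    Returns an (K, 3) array of cluster centers used as keypoint estimates.
    """
    # Filter out low-confidence points, as in the paper's filtering step,
    # then cast votes by translating each surviving point by its offset.
    mask = confidences >= conf_thresh
    votes = points[mask] + offsets[mask]

    # Greedy radius clustering: repeatedly take a seed vote, average all
    # votes within `radius` of it into one keypoint, and discard them.
    centers = []
    remaining = votes
    while len(remaining) > 0:
        seed = remaining[0]
        near = np.linalg.norm(remaining - seed, axis=1) <= radius
        centers.append(remaining[near].mean(axis=0))
        remaining = remaining[~near]
    return np.array(centers)

# Example: five points, two ground-truth keypoints; the last point is noise
# and gets a low confidence, so it is excluded before clustering.
pts = np.array([[0.0, 0, 0], [0.1, 0, 0], [1, 1, 1], [1.1, 1, 1], [5, 5, 5]])
kps = np.array([[0.0, 0, 0], [0, 0, 0], [1, 1, 1], [1, 1, 1], [0, 0, 0]])
offs = kps - pts
conf = np.array([0.9, 0.9, 0.9, 0.9, 0.1])
print(vote_keypoints(pts, offs, conf))  # two centers, one per keypoint
```

The correspondence branch would run alongside this, assigning a semantic label to each high-confidence point; only the saliency path is sketched here.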