Geometric Context Sensitive Loss and Its Application for Nonrigid Structure from Motion.

2021 
Coordinate prediction is an important signal processing task that aims to predict 2D or 3D coordinates from a single image or a series of images. Previous techniques use a Mean Square Error (MSE) loss function to measure the difference between the predicted coordinates and their corresponding ground truth during training, which implicitly assumes that coordinates are independent of each other, ignoring their correlations and neglecting the geometric context of the object. To address this issue, this paper presents a novel loss function, named Geometric Context Sensitive (GCS) loss, that models the geometric shape context of general objects by measuring the difference between any pair of predicted coordinates and the corresponding pair of ground-truth coordinates. The proposed method has several advantages: (1) the GCS loss is trainable and can be optimized by Gauss-Newton in traditional models; (2) the GCS loss can be formulated in both 2D and 3D forms, so it is easy to implement and can be integrated naturally and effectively into popular 2D/3D coordinate prediction models, e.g., nonrigid structure from motion (NRSFM); (3) it introduces no additional learnable parameters. Though the proposed GCS loss is conceptually simple, extensive experiments on several public NRSFM datasets show that it significantly boosts performance. We share the implementation code and models of our proposed methods at https://github.com/nianfudong/GCS/tree/master/NRSFM.
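The abstract's core idea — comparing pairs of predicted coordinates against the corresponding ground-truth pairs, rather than each coordinate independently — can be sketched as follows. This is a hedged interpretation of the abstract, not the paper's exact formulation: the function names and the squared-error aggregation over all point pairs are assumptions for illustration.

```python
import numpy as np

def mse_loss(pred, gt):
    # Standard per-coordinate MSE: treats each point independently,
    # ignoring the geometric relations between points.
    return np.mean((pred - gt) ** 2)

def gcs_loss(pred, gt):
    """Sketch of a geometric-context-sensitive loss (assumed form).

    Penalizes the discrepancy between every pair of predicted
    coordinates and the corresponding ground-truth pair, so the
    relative geometry (shape context) of the point set is preserved.
    pred, gt: (N, D) arrays of N points in D (2 or 3) dimensions.
    """
    # Pairwise difference vectors, shape (N, N, D), via broadcasting.
    pred_pairs = pred[:, None, :] - pred[None, :, :]
    gt_pairs = gt[:, None, :] - gt[None, :, :]
    return np.mean((pred_pairs - gt_pairs) ** 2)
```

One property this sketch makes visible: a prediction that is a rigid translation of the ground truth has zero pairwise loss (the relative geometry is exact) even though its per-point MSE is nonzero, which is precisely why such a term complements, rather than replaces, a standard coordinate loss.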