Robust Dense Visual Odometry with boundary pixel suppression

2016 
Pose estimation and 3D environment reconstruction are crucial for autonomous navigation in mobile robotics. Robust dense visual odometry based on an RGB-D sensor uses all pixels to estimate frame-to-frame motion by minimizing the photometric and geometric error. The 3D coordinates of each pixel are computed from its corresponding depth measurement. However, the depths reported by RGB-D sensors for pixels near object boundaries are often inaccurate. Standard robust dense visual odometry does not account for the impact of this depth noise on the photometric and geometric errors. In this paper, we model the uncertainties of the photometric and geometric errors as functions of depth noise and show that depth noise near object boundaries can significantly degrade motion estimation. We present a modified robust dense visual odometry with boundary pixel suppression. Publicly available benchmark datasets are employed to evaluate our system, and the results show that our method achieves higher accuracy than the state-of-the-art Dense Visual Odometry (DVO).
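The core idea of boundary pixel suppression can be illustrated with a small sketch: pixels whose local depth gradient is large sit on object boundaries, where RGB-D depth is unreliable, so their residuals are down-weighted to zero before the photometric/geometric error is minimized. This is a minimal illustration, not the paper's exact formulation; the function names, the gradient-magnitude test, and the threshold value are all assumptions.

```python
import numpy as np

def boundary_mask(depth, grad_thresh=0.1):
    """Flag pixels whose depth gradient magnitude stays below a
    threshold. Large depth gradients mark object boundaries, where
    RGB-D depth readings are noisy (hypothetical threshold, in
    metres per pixel)."""
    dz_y, dz_x = np.gradient(depth)
    grad_mag = np.hypot(dz_x, dz_y)
    return grad_mag < grad_thresh  # True = reliable interior pixel

def suppression_weights(depth, grad_thresh=0.1):
    """Binary per-pixel weights that zero out boundary pixels in the
    error sum (a simple suppression sketch; the paper's actual
    uncertainty-based weighting may differ)."""
    return boundary_mask(depth, grad_thresh).astype(float)

# Toy example: a flat surface at 1 m with a depth jump at column 4,
# mimicking an object edge seen by an RGB-D sensor.
depth = np.ones((6, 8))
depth[:, 4:] = 2.0
w = suppression_weights(depth)
# Pixels straddling the jump (columns 3 and 4) receive zero weight;
# interior pixels keep full weight.
```

In a full pipeline these weights would multiply the per-pixel photometric and geometric residuals inside the robust (e.g. iteratively reweighted least squares) motion estimation, so boundary pixels no longer bias the frame-to-frame pose.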