Multi-Classes and Motion Properties for Concurrent Visual SLAM in Dynamic Environments

2021 
Working in a dynamic environment is a challenging problem for visual simultaneous localization and mapping (visual SLAM). Most existing visual SLAM algorithms fail when moving objects dominate the scene, producing significant errors or losing tracking altogether. We identify two causes of these failures: (i) previous approaches use information from all regions of the image; (ii) existing algorithms divide objects into just two groups and discard all feature points on movable objects. In this paper, we propose a novel Multi-Classes and Motion Properties for Concurrent Visual SLAM (MCV-SLAM) algorithm, which divides objects into five classes and concurrently fuses prior knowledge with observations of moving objects through semantic segmentation, so that visual SLAM works properly in dynamic environments in real time. We also propose an adaptive method that optimizes camera pose by using more potential inlier feature points with continuous weights, while eliminating the influence of moving objects. Our experiments are performed on public datasets of both indoor and outdoor scenes with moving objects in dynamic environments. The experimental results demonstrate that our method outperforms previous work with greater robustness and smaller tracking errors, and that MCV-SLAM handles situations (i.e., dominance of moving objects, lack of matching points) that cause misestimation in existing SLAM systems.
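To make the abstract's second contribution concrete, the sketch below illustrates one way continuous per-point weights could enter a camera-pose optimization: each feature point's reprojection residual is scaled by a weight fused from a class prior and an observed motion score, so points on likely-moving objects contribute less instead of being discarded outright. The class names, weight values, fusion rule, and solver choice here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of continuous-weight pose optimization in the spirit of
# the abstract; all names and values below are assumptions for illustration.
import numpy as np
from scipy.optimize import least_squares

# Assumed prior motion weights for five semantic categories
# (1.0 = fully static / trusted, 0.0 = always moving / ignored).
CLASS_PRIOR_WEIGHT = {
    "static": 1.0,          # e.g. buildings, road surface
    "movable_static": 0.8,  # e.g. parked cars currently at rest
    "low_dynamic": 0.5,
    "high_dynamic": 0.1,    # e.g. walking pedestrians
    "unknown": 0.6,
}

def point_weight(class_name, observed_motion):
    """Fuse the class prior with an observed motion score in [0, 1]."""
    return CLASS_PRIOR_WEIGHT[class_name] * (1.0 - observed_motion)

def project(pose, pts_3d, K):
    """Pinhole projection under pose = (rx, ry, rz, tx, ty, tz), axis-angle rotation."""
    rvec, t = pose[:3], pose[3:]
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * Kx @ Kx
    cam = pts_3d @ R.T + t
    uv = cam[:, :2] / cam[:, 2:3]
    return uv @ K[:2, :2].T + K[:2, 2]

def weighted_residuals(pose, pts_3d, pts_2d, weights, K):
    """Reprojection residuals, each scaled by its continuous weight."""
    err = project(pose, pts_3d, K) - pts_2d
    return (err * weights[:, None]).ravel()

# Toy usage with synthetic data: 25 static points, 5 highly dynamic points.
rng = np.random.default_rng(0)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts_3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(30, 3))
pts_2d = project(np.zeros(6), pts_3d, K) + rng.normal(0, 0.5, (30, 2))
weights = np.array([point_weight("static", 0.0)] * 25
                   + [point_weight("high_dynamic", 0.9)] * 5)
result = least_squares(weighted_residuals, x0=np.zeros(6),
                       args=(pts_3d, pts_2d, weights, K), loss="huber")
print("estimated pose:", result.x)
```

With this kind of soft weighting, points on movable but currently static objects (e.g. parked cars) can still support pose estimation instead of being blocked as in two-group schemes, which is the behavior the abstract argues for.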