A Methodology for Principled Approximation in Visual SLAM

2020 
This paper proposes a methodology for exploiting approximate computing to reduce the time and energy requirements of Simultaneous Localization and Mapping (SLAM) algorithms, which are used in important problem domains such as robotics and autonomous driving, where autonomous agents must navigate through unknown environments. SLAM algorithms use sensors to probe the environment, integrate this information into a map of the surroundings (mapping), and determine where the agent is within this map (localization). Visual SLAM algorithms use cameras as sensors. They can be used in places where GPS information is unavailable, such as inside buildings, but they have high computational requirements, leading to poor performance and high energy usage on embedded platforms. Existing studies of approximation in SLAM have mostly used offline control, which requires the trajectory to be known before the agent starts to move. This is not realistic in most SLAM applications. In this paper, we present a general methodology for applying principled online approximation to visual SLAM algorithms. We implemented our proposed methodology in four visual SLAM algorithms (including one visual-inertial SLAM algorithm) and evaluated them on several platforms. Our experimental results show that across different algorithms and platforms, our methodology yields savings of up to 77% in computation time and 40% in energy consumption, with acceptable quality loss in localization and mapping accuracy over a variety of inputs.
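To make the contrast with offline control concrete, the following is a minimal illustrative sketch (not the paper's actual implementation; all names and thresholds are assumptions) of online approximation control: a feedback controller that adjusts an approximation "knob" — here, the number of features the SLAM front end tracks — frame by frame, based on observed localization error, so no prior knowledge of the trajectory is needed.

```python
# Hypothetical sketch of online approximation control for a visual SLAM
# front end. The controller shrinks the tracked-feature budget while pose
# error stays low (saving time and energy) and grows it back when error
# rises, making every decision online, during navigation.

class OnlineKnobController:
    def __init__(self, n_features=1000, n_min=200, n_max=2000,
                 err_low=0.02, err_high=0.05):
        self.n_features = n_features          # current approximation setting
        self.n_min, self.n_max = n_min, n_max  # knob bounds
        self.err_low, self.err_high = err_low, err_high  # error thresholds (m)

    def update(self, pose_error):
        """Adjust the feature budget after each frame from observed error."""
        if pose_error < self.err_low:
            # Accuracy headroom: approximate more aggressively.
            self.n_features = max(self.n_min, int(self.n_features * 0.9))
        elif pose_error > self.err_high:
            # Accuracy at risk: back off toward the exact configuration.
            self.n_features = min(self.n_max, int(self.n_features * 1.25))
        return self.n_features


# Example: two accurate frames shrink the budget, a bad frame restores it.
ctrl = OnlineKnobController()
for err in [0.01, 0.01, 0.08, 0.03]:
    budget = ctrl.update(err)
```

In a real system the pose-error signal would itself be an online estimate (e.g. reprojection error or inter-frame consistency), since ground-truth error is unavailable during navigation.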