Vision-based Monte Carlo self-localization for a mobile service robot acting as shopping assistant in a home store

2002 
We present a novel omnivision-based robot localization approach that utilizes Monte Carlo Localization (MCL), a Bayesian filtering technique based on a density representation by means of particles. The capability of this method to approximate arbitrary likelihood densities is crucial for dealing with the highly ambiguous localization hypotheses typical of real-world environments. We show how omnidirectional imaging can be combined with the MCL algorithm to globally localize and track a mobile robot given a taught graph-based representation of the operation area. In contrast to other approaches, the nodes of our graph are labeled with both visual feature vectors extracted from the omnidirectional image and odometric data about the robot's pose at the moment of node insertion (position and heading direction). To demonstrate the reliability of our approach, we present first experimental results from a challenging robotics application: the self-localization of a mobile service robot acting as a shopping assistant in a very regularly structured, maze-like, and crowded environment, a home store.
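The predict/weight/resample cycle behind MCL can be sketched as follows. This is a minimal illustration of generic particle filtering, not the authors' implementation; the pose representation, the Gaussian similarity measure on feature vectors, and the helper `expected_at` (which would query the taught graph for the feature vector expected at a pose) are all assumptions made for the example.

```python
import math
import random

# A particle is a pose hypothesis with a weight: (x, y, heading, w).

def predict(particles, dist, dtheta, noise=0.05):
    """Motion update: advance every particle by the odometry reading
    (dist, dtheta), perturbed with Gaussian noise."""
    out = []
    for (x, y, th, w) in particles:
        th2 = th + dtheta + random.gauss(0.0, noise)
        out.append((x + dist * math.cos(th2) + random.gauss(0.0, noise),
                    y + dist * math.sin(th2) + random.gauss(0.0, noise),
                    th2, w))
    return out

def weight(particles, observed, expected_at, sigma=1.0):
    """Sensor update: re-weight particles by the similarity between the
    observed visual feature vector and the vector expected at each
    particle's pose (expected_at is a hypothetical map lookup)."""
    out = []
    for (x, y, th, _) in particles:
        d2 = sum((a - b) ** 2
                 for a, b in zip(observed, expected_at(x, y, th)))
        out.append((x, y, th, math.exp(-d2 / (2.0 * sigma ** 2))))
    total = sum(p[3] for p in out) or 1.0  # normalize to a density
    return [(x, y, th, w / total) for (x, y, th, w) in out]

def resample(particles):
    """Draw a new particle set with probability proportional to weight,
    concentrating particles on likely pose hypotheses."""
    n = len(particles)
    picks = random.choices(particles, weights=[p[3] for p in particles], k=n)
    return [(x, y, th, 1.0 / n) for (x, y, th, _) in picks]
```

Because the weights form a density over many particles rather than a single estimate, the filter can maintain several competing pose hypotheses at once, which is what makes this representation suited to the ambiguous, self-similar aisles of a home store.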