Self-localization using sporadic features

2002 
Abstract
Knowing its position in an environment is an essential capability for any useful mobile robot. Monte Carlo Localization (MCL) has become a popular framework for solving the self-localization problem in mobile robots. The known methods exploit sensor data obtained from laser range finders or sonar rings to estimate robot positions and are quite reliable and robust against noise. An open question is whether comparable localization performance can be achieved using only camera images, especially if the camera images are used both for localization and for object recognition. In such a situation, it is harder both to obtain suitable models for predicting sensor readings and to correlate actual sensor data with predicted readings. Both problems can be easily solved if localization is based on features obtained by preprocessing the images. In our image-based MCL approach, we combine visual distance features and visual landmark features, which have different characteristics. Distance features can always be computed, but their noise margins depend on the measured value and grow dramatically with it. Landmark features give good directional information, but are detected only sporadically. In this paper, we discuss the problems arising from these characteristics and show experimentally that MCL nevertheless works very well under these conditions.
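The combination of an ever-present but noisy distance cue with a precise but sporadic landmark cue maps directly onto the measurement-update step of a particle filter. The following Python sketch is not taken from the paper; it only illustrates, under assumed noise models and a hypothetical border-distance sensor model (the names predict_border_distance, FIELD_LENGTH, and all constants are illustrative assumptions), how such features could weight and resample MCL particles.

```python
import numpy as np

FIELD_LENGTH = 6.0  # assumed field size in metres (illustrative only)

def predict_border_distance(particles):
    # Hypothetical sensor model: distance from each particle to the far
    # field border along the x axis (a stand-in for visual distance features).
    return np.clip(FIELD_LENGTH - particles[:, 0], 0.0, None)

def mcl_update(particles, weights, dist_obs, landmark_obs, landmark_map):
    """One measurement-update step of Monte Carlo Localization.

    particles    : (N, 3) array of pose hypotheses (x, y, theta)
    weights      : (N,) importance weights
    dist_obs     : visual distance reading (always available)
    landmark_obs : (bearing, landmark_id) tuple, or None if nothing detected
    landmark_map : dict landmark_id -> known (x, y) position
    """
    # Distance feature: usable in every frame, but its noise margin grows
    # with the measured value, so sigma scales with the reading (assumed model).
    sigma_d = 0.1 + 0.3 * dist_obs
    pred = predict_border_distance(particles)
    weights = weights * np.exp(-0.5 * ((dist_obs - pred) / sigma_d) ** 2)

    # Landmark feature: detected only sporadically, but gives precise bearing.
    if landmark_obs is not None:
        bearing, lm_id = landmark_obs
        lx, ly = landmark_map[lm_id]
        expected = np.arctan2(ly - particles[:, 1], lx - particles[:, 0]) - particles[:, 2]
        err = np.angle(np.exp(1j * (bearing - expected)))   # wrap to (-pi, pi]
        weights = weights * np.exp(-0.5 * (err / 0.05) ** 2)  # assumed bearing noise

    weights = weights / weights.sum()

    # Resample when the effective sample size collapses (standard MCL practice).
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

In this sketch the distance term keeps the filter converged between landmark sightings, while each sporadic landmark detection sharply reweights the particle set using its accurate bearing; this mirrors the complementary roles the abstract attributes to the two feature types.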