The visual homing problem: an example of robotics/biology cross fertilization

2000 
Abstract In this paper, we describe how a mobile robot under simple visual control can reach a particular goal location in an open environment. Our model needs neither a precise map nor learning of every possible position in the environment. The system is a neural architecture inspired by neurobiological analyses of how visual patterns, called landmarks, are recognized. The robot merges this visual information with the landmarks' azimuths to build a plastic representation of its location. This representation is then used to learn the best movement to reach the goal. Simple, fast, on-line learning of a few places located near the goal allows the goal to be reached from anywhere in its neighborhood. The system relies only on a very coarse representation of the robot's environment and exhibits strong generalization capabilities. We describe an efficient implementation of autonomous, motivated navigation tested on our robot in real indoor environments, and we discuss the limitations of the model and its possible extensions.
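To make the place-based homing scheme in the abstract concrete, here is a minimal sketch, assuming a simplified view representation (landmark identities paired with their azimuths) and a simple similarity measure; the names `Place` and `HomingController` and all numeric values are illustrative assumptions, not the paper's implementation.

```python
import math

class Place:
    """A learned place: landmark azimuths observed there, plus the
    movement (heading, in radians) that led toward the goal from there.
    This is an illustrative stand-in for the paper's neural place coding."""
    def __init__(self, landmark_azimuths, goal_heading):
        self.landmark_azimuths = landmark_azimuths  # {landmark_id: azimuth (rad)}
        self.goal_heading = goal_heading

    def activation(self, current_view):
        """Similarity between the stored view and the current one:
        landmarks seen in both views contribute more when their azimuths agree."""
        score = 0.0
        for lm, stored_az in self.landmark_azimuths.items():
            if lm in current_view:
                # smallest signed angular difference, wrapped to [-pi, pi]
                diff = abs(math.atan2(math.sin(current_view[lm] - stored_az),
                                      math.cos(current_view[lm] - stored_az)))
                score += max(0.0, 1.0 - diff / math.pi)
        return score / max(len(self.landmark_azimuths), 1)


class HomingController:
    """Stores a few places learned on-line near the goal; at run time it
    reproduces the movement associated with the most active (most similar) place."""
    def __init__(self):
        self.places = []

    def learn_place(self, landmark_azimuths, goal_heading):
        self.places.append(Place(landmark_azimuths, goal_heading))

    def choose_heading(self, current_view):
        best = max(self.places, key=lambda p: p.activation(current_view))
        return best.goal_heading


# Usage example: two places learned near the goal generalize to a new
# position whose (partial, slightly shifted) view resembles one of them.
controller = HomingController()
controller.learn_place({"door": 0.2, "lamp": 1.5, "window": -2.0}, goal_heading=0.5)
controller.learn_place({"door": -0.8, "lamp": 0.9, "window": 2.6}, goal_heading=-1.2)

new_view = {"door": 0.1, "lamp": 1.4}
print(controller.choose_heading(new_view))  # -> 0.5 (movement of the nearest learned place)
```

The point of the sketch is the generalization property stated in the abstract: because recognition is graded rather than exact, a handful of learned places around the goal suffices to produce a sensible movement from any nearby, previously unvisited position.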