Informing a Robot of Object Location with Both Hand-Gesture and Verbal Cues

2003 
Recently, many kinds of robots have been developed, and many of them work in human living spaces. One of the most important interactions between a robot and a human occurs when the human informs the robot of an object's location. The purpose of this work is to build an interface for informing a robot of an object's location in a human living space containing several objects. We assume that the robot has already located the user by sound source localization. First, the robot recognizes the user's pointing gesture and verbal cues and detects candidate object locations. The system recognizes the pointing direction with a stereo camera and also recognizes the verbal cues. The pointing direction and the directive word are used to restrict the search space. When multiple candidate objects are detected, the system asks the user for additional features, such as a color name or the relative locations among the candidates, and then identifies one of them. We conducted experiments on a dialog task with three objects in the search space. The system was able to specify the object through dialog, after which the robot moved toward it.
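The candidate-restriction and disambiguation steps described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the angular tolerance, object representation, and color-based second dialog turn are all assumptions introduced for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    # Hypothetical object representation: name, color, and 2-D room position.
    name: str
    color: str
    x: float
    y: float

def candidates_by_pointing(objects, user_pos, point_angle, tol=math.radians(15)):
    """Keep objects whose direction from the user lies within `tol` radians
    of the recognized pointing direction (the search-space restriction)."""
    kept = []
    for obj in objects:
        angle = math.atan2(obj.y - user_pos[1], obj.x - user_pos[0])
        # Wrap the angular difference into [-pi, pi] before comparing.
        diff = abs((angle - point_angle + math.pi) % (2 * math.pi) - math.pi)
        if diff <= tol:
            kept.append(obj)
    return kept

def disambiguate_by_color(candidates, color):
    """Second dialog turn: narrow the candidates using a color name."""
    return [c for c in candidates if c.color == color]

# Three objects in the search space, as in the experiment described above.
objects = [
    SceneObject("ball", "red", 2.0, 0.1),
    SceneObject("cup", "blue", 2.0, -0.1),
    SceneObject("book", "green", -1.0, 2.0),
]
cands = candidates_by_pointing(objects, user_pos=(0.0, 0.0), point_angle=0.0)
if len(cands) > 1:
    # Multiple candidates remain, so ask for an additional feature.
    cands = disambiguate_by_color(cands, "red")
```

Here the pointing gesture alone leaves two candidates (ball and cup), so the system would request an extra feature; the color "red" then uniquely identifies the ball.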