Constructing mutual context in human-robot collaborative problem solving with multimodal input

2020 
Abstract
We describe a system designed for a human and a robot to solve problems together in a shared space, using speech and gesture to support natural interaction. The system uses a whiteboard-type architecture to represent and maintain information about the different aspects of the problem-solving process, representing problem information at a fairly high level so that shared context remains accessible. We give an overview of the system's current status and the aspects still under development, explain its components and the information each produces, describe how information accumulates on the whiteboard, and discuss and evaluate how the system addresses various aspects of shared context.
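The whiteboard (blackboard) pattern the abstract describes can be sketched as a shared store that independent components post to and query. This is a minimal illustrative sketch, not the authors' actual architecture; all class and method names (`Whiteboard`, `post`, `query`) are assumptions introduced here.

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class Whiteboard:
    """Shared store where components post and read typed entries.
    Names and entry kinds here are hypothetical illustrations."""
    entries: list[dict] = field(default_factory=list)

    def post(self, source: str, kind: str, payload: Any) -> None:
        # Each component (e.g. speech recognizer, gesture tracker, planner)
        # appends its output, so information accumulates over the session.
        self.entries.append({"source": source, "kind": kind, "payload": payload})

    def query(self, kind: str) -> list[dict]:
        # Other components read only the entry kinds they care about,
        # which keeps components decoupled from one another.
        return [e for e in self.entries if e["kind"] == kind]


wb = Whiteboard()
wb.post("speech", "utterance", "put the red block here")
wb.post("gesture", "deixis", {"target": (0.4, 0.7)})
# A fusion component could resolve "here" against the pointing gesture:
pointing = wb.query("deixis")
```

The design choice is that no component calls another directly; all coordination happens through the shared store, which is what makes the accumulated context inspectable by every component.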