Modeling robot co-representation: state-of-the-art, open issues, and predictive learning as a possible framework

2020 
Robots are becoming increasingly present in many spheres of human life, making the need for robots that can engage in natural social interactions with humans paramount. Human-robot interaction would be more effective if robots could both act predictably and predict humans' actions. If robots could represent their human partners and generate behaviors in line with the partners' expectations, which are grounded in humans' mental models of interdependent action, human agents could apply the predictive and adaptive mechanisms acquired in human-human interaction to interact with robots effectively. How could robots be predictable and capable of predicting human behavior? We propose that this can be achieved by equipping the robot with an internal representation of both itself and the other agent, that is, with the ability to co-represent. Here, co-representation refers to representing the partner's actions alongside one's own. Although co-representation is an essential process for successful human social interaction, as it supports understanding of others' actions, co-representation processes have so far only rarely been integrated into robotic platforms. We highlight state-of-the-art findings on co-representation in social robotics, discuss current research limitations and open issues in creating computational models of co-representation for robots, and put forward the idea that predictive learning might constitute a particularly promising framework for building models of co-representing robots. Overall, this article offers an integrated view of the robotics literature on co-representation and outlines directions for future research, with the aim of supporting the development of robots equipped with co-representation models fit for smooth social interaction.
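To make the proposal more concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of how a predictive-learning co-representation module might look. It assumes a single linear forward model over a joint action space that encodes both the robot's own actions and the partner's actions, updated by a simple delta rule on prediction error; all dimensions, the linear model choice, and the learning rate are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: one predictive model over the JOINT space of
# self-actions and partner-actions (the "co-representation"), trained by
# prediction error. All parameters below are assumptions for this sketch.

rng = np.random.default_rng(0)

N_SELF, N_PARTNER = 4, 4          # sizes of self- and partner-action encodings
N_JOINT = N_SELF + N_PARTNER      # the co-represented (joint) action space

W = np.zeros((N_JOINT, N_JOINT))  # linear forward model over the joint space
LR = 0.05                         # learning rate (assumed)

def predict(joint_state):
    """Predict the next joint state (own action + partner action)."""
    return W @ joint_state

def update(joint_state, observed_next):
    """Adjust the forward model in proportion to the prediction error."""
    global W
    error = observed_next - predict(joint_state)   # prediction error
    W += LR * np.outer(error, joint_state)         # delta-rule update
    return float(np.linalg.norm(error))

# Toy interaction loop: the joint dynamics (including the partner's behavior)
# follow a fixed linear rule unknown to the robot; the model gradually learns
# to anticipate both its own and the partner's next action encoding.
true_dynamics = rng.normal(scale=0.3, size=(N_JOINT, N_JOINT))
for step in range(500):
    state = rng.normal(size=N_JOINT)       # current joint (self + partner) encoding
    next_state = true_dynamics @ state     # what actually happens next
    err = update(state, next_state)
    if step % 100 == 0:
        print(f"step {step:3d}  prediction error {err:.3f}")
```

In this toy setting, prediction error over the joint space falls as the model learns, which is the sense in which a co-representing robot would become both more predictable to its partner and better at predicting the partner; a real implementation would of course replace the linear model with a richer learned dynamics model and ground the action encodings in perception.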