Do We Take a Robot's Needs into Account? The Effect of Humanization on Prosocial Considerations Toward Other Human Beings and Robots.

2020 
Robots are becoming an integral part of society, yet the extent to which we are prosocial toward these nonliving objects is unclear. While previous research shows that we tend to take care of robots in high-risk, high-consequence situations, this has not been investigated in more day-to-day, low-consequence situations. Thus, we utilized an experimental paradigm (the Social Mindfulness "SoMi" paradigm) that involved a trade-off between participants' own interests and their willingness to take their task partner's needs into account. In two experiments, we investigated whether participants would take the needs of a robotic task partner into account to the same extent as when the task partner was a human (Study I), and whether this was modulated by participants' anthropomorphic attributions to said robot (Study II). In Study I, participants were presented with a social decision-making task, which they performed once by themselves (solo context) and once with a task partner (either a human or a robot). Subsequently, in Study II, participants performed the same task, but this time with both a human and a robotic task partner. The task partners were introduced via neutral or anthropomorphic priming stories. Results indicate that humanizing a task partner indeed increases our tendency to take that partner's needs into account in a social decision-making task. However, this effect was only found for a human task partner, not for a robot. Thus, while anthropomorphizing a robot may lead us to save it when it is about to perish, it does not make us more socially considerate of it in day-to-day situations.