Can Robots Be Bullied? A Crowdsourced Feasibility Study for Using Social Robots in Anti-Bullying Interventions

2021 
Bullying in schools is a serious issue with severe and long-term consequences. We explore using social robots in anti-bullying programs to encourage children to intervene in the bullying of their peers. To that end, we conducted a crowdsourced study to explore the feasibility of using robots in the context of bullying (i.e., to investigate whether robots are perceived as entities that can be bullied). We present qualitative and quantitative results from a between-subjects video study comparing robot bullying (robots being bullied) to human bullying (humans being bullied). Our findings suggest that while the majority of participants describe both instances with connotations of wrongness and immorality, they use different cognitive mechanisms for moral disengagement with robot bullying vs. human bullying. We also found significant differences in participants’ perceptions of each scenario, including associating robot mistreatment with bullying less strongly and being less willing to intervene in it. This work contributes insights toward understanding how people perceive bullying of robots, designing intelligent behaviors to discourage bullying of robots, and our long-term goal of developing anti-bullying pedagogical programs that use social robots.