Investigating Trust in Human-Machine Learning Collaboration: A Pilot Study on Estimating Public Anxiety from Speech

2021 
Trust is a key element in developing effective collaborative relationships between humans and increasingly complex artificial intelligence (AI) systems. Here, we examine trust in AI in the context of a human-AI partnership on a joint decision-making task: estimating levels of public speaking anxiety from speech signals. The AI system comprises an explainable machine learning (ML) algorithm that takes acoustic characteristics as input and outputs an estimate of public speaking anxiety, a local explanation of the features that contributed most to the decision for each speech sample, and a global explanation of the most important features across the data overall. We analyze interactions between the AI and human annotators with a background in the psychological sciences, and measure trust over time via the annotators’ agreement with the AI model and the annotators’ self-reports. We further examine factors of trust related to the characteristics of the human annotator and of the ML algorithm. Results indicate that trust in AI depends on the annotator’s openness level and on the importance level of the input features. Findings from this study can inform the design of solutions that properly calibrate human trust in AI in collaborative human-AI tasks.
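The abstract describes a pipeline with three outputs: an anxiety estimate, a per-sample (local) explanation, and a dataset-level (global) explanation. Below is a minimal sketch of one common way to realize such a design, not the authors' implementation: a tree-based regressor over hand-picked acoustic features, with SHAP values supplying the local and global feature rankings. The feature names, model choice, and synthetic data are all illustrative assumptions.

```python
# Sketch of an explainable anxiety-estimation pipeline (illustrative only).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical acoustic features; the paper's actual feature set may differ.
feature_names = ["pitch_mean", "pitch_std", "energy_mean", "jitter", "speech_rate"]
X = rng.normal(size=(200, len(feature_names)))  # stand-in acoustic features
y = X @ rng.normal(size=len(feature_names)) + rng.normal(scale=0.1, size=200)  # stand-in anxiety scores

# Regressor mapping acoustic characteristics to an anxiety estimate.
model = GradientBoostingRegressor().fit(X, y)

# SHAP values: one importance score per feature per sample.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Local explanation: features that most influenced a single sample's estimate.
sample = 0
local_rank = np.argsort(-np.abs(shap_values[sample]))
print("local :", [feature_names[i] for i in local_rank])

# Global explanation: mean absolute SHAP value across all samples.
global_rank = np.argsort(-np.abs(shap_values).mean(axis=0))
print("global:", [feature_names[i] for i in global_rank])
```

In a setup like this, the local ranking would accompany each sample shown to an annotator, while the global ranking summarizes the model's behavior over the whole dataset, matching the two explanation types the abstract names.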