Augmenting medical diagnosis decisions? An investigation into physicians’ decision making process with artificial intelligence

2020 
Systems based on artificial intelligence (AI) increasingly support physicians in diagnostic decisions. Compared to rule-based systems, however, these systems are less transparent and their errors less predictable. Much research currently aims to improve AI technologies and debates their societal implications. Surprisingly little effort is spent on understanding the cognitive challenges of decision augmentation with AI-based systems, although these systems make it more difficult for decision makers to evaluate the correctness of system advice and to decide whether to accept or reject it. As little is known about the cognitive mechanisms that underlie such evaluations, we take an inductive approach to understand how AI advice influences physicians' decision making process. We conducted experiments with a total of 68 novice and 12 experienced physicians who diagnosed patient cases with an AI-based system that provided both correct and incorrect advice. Based on qualitative data from think-aloud protocols, interviews, and questionnaires, we elicit five decision making patterns and develop a process model of medical diagnosis decision augmentation with AI advice. We show that physicians use distinct metacognitions to monitor and control their reasoning while assessing AI advice. These metacognitions determine whether physicians are able to reap the full benefits of AI. Specifically, wrong diagnostic decisions often result from shortcomings in utilizing metacognitions related to decision makers' own reasoning (self-monitoring) and metacognitions related to the AI-based system (system-monitoring). As a result, physicians base decisions on beliefs rather than actual data or engage in unsuitably superficial information search. Our findings provide a first perspective on the metacognitive mechanisms that decision makers use to evaluate system advice. Overall, our study sheds light on an overlooked facet of decision augmentation with AI, namely the crucial role of human actors in compensating for technological errors.