The Emperor’s New Clothes? Transparency and Trust in Machine Learning for Clinical Neuroscience

2021 
Machine learning (ML) constitutes the backbone of many applications of artificial intelligence. In the field of clinical neuroscience, applying ML to neuroimaging data promises wide-ranging advancements. Yet such potential diagnostic and predictive tools pose new challenges with regard to old problems of transparency and trust. After all, the very design of many ML applications can preclude comprehensive explanations of their inner workings and impede accurate predictions about their future behavior, supposedly clashing with the ideal of transparency. It is often claimed that these shortcomings, inherent to many ML applications, are detrimental to their trustworthiness and thus hinder the implementation of new and potentially beneficial techniques. In this chapter, I will argue against beliefs that inextricably link transparency and trustworthiness. Drawing in particular on the framework of the British philosopher and bioethicist Onora O’Neill, I aim to show why, contrary to many intuitions, an obsession with transparency can be detrimental to tackling more fundamental ethical issues, and hence that transparency may not solve as many challenges for clinical ML applications as is usually assumed. I will conclude with a tentative suggestion on how to move forward from a practical point of view so as to advance the trustworthiness of ML for clinical neuroscience.