Predictive Medicine Using Interpretable Recurrent Neural Networks

2021 
Deep learning has been revolutionizing multiple aspects of our daily lives, thanks to its state-of-the-art results. However, the complexity of its models and the associated difficulty of interpreting their results have prevented it from being widely adopted in healthcare systems. This represents a missed opportunity, especially considering the growing volumes of Electronic Health Record (EHR) data, as hospitals and clinics increasingly collect information in digital databases. While there are studies applying artificial neural networks to this type of data, the interpretability component tends to be approached lightly or even disregarded. Here we demonstrate the superior capability of recurrent neural network-based models, which outperform multiple baselines with an average test AUC of 0.94 when predicting the use of non-invasive ventilation by Amyotrophic Lateral Sclerosis (ALS) patients, while also presenting a comprehensive explainability solution. To interpret these complex recurrent algorithms, the robust SHAP package was adapted and a new instance importance score was defined, highlighting the effect of feature values and of time-series samples on the output, respectively. These concepts were then combined in a dashboard, which serves as a proof of concept for an AI-enhanced detailed analysis tool for medical staff.
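As a rough illustration of the two interpretability ideas in the abstract, the sketch below applies SHAP's GradientExplainer to a toy LSTM classifier over EHR-like time series and computes a simple occlusion-based instance importance score. The architecture, tensor dimensions, and the scoring rule are illustrative assumptions, not the paper's exact adaptation of SHAP or its instance importance definition.

```python
import torch
import torch.nn as nn
import shap

# Hypothetical recurrent classifier over EHR time series:
# (batch, time steps, features) -> probability of NIV use.
class LSTMClassifier(nn.Module):
    def __init__(self, n_features, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))  # predict from last step

model = LSTMClassifier(n_features=20)
background = torch.randn(100, 10, 20)  # reference set for the explainer
patients = torch.randn(5, 10, 20)      # sequences to explain

# Feature-level attributions: GradientExplainer handles recurrent
# layers that DeepExplainer can struggle with.
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(patients)  # per feature, per time step

# Instance importance (assumed formulation): how much the prediction
# moves when a single time-series sample is occluded with zeros.
def instance_importance(model, x):
    with torch.no_grad():
        base = model(x.unsqueeze(0)).item()
        scores = []
        for t in range(x.shape[0]):
            occluded = x.clone()
            occluded[t] = 0.0
            scores.append(base - model(occluded.unsqueeze(0)).item())
    return scores  # positive score: that sample pushed the prediction up

print(instance_importance(model, patients[0]))
```

In a dashboard like the one described, the SHAP values would drive feature-level plots while the per-sample scores would highlight which clinical visits most influenced a given patient's prediction.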