Significance, Relevance and Explainability in the Machine Learning Age: An Econometrics and Financial Data Science Perspective

2020 
Although machine learning is frequently associated with neural networks, it also comprises econometric regression approaches and other statistical techniques whose accuracy improves with increasing numbers of observations. What constitutes high-quality machine learning, however, remains unclear. Proponents of deep learning (i.e. neural networks) value computational efficiency over human interpretability and tolerate the “black box” character of their algorithms, whereas proponents of explainable artificial intelligence (XAI) employ traceable “white box” methods (e.g. regressions) to enhance explainability for human decision makers. We extend Brooks et al.’s (2019) work on significance and relevance as assessment criteria in econometrics and financial data science to contribute to this debate. Specifically, we identify explainability as the Achilles heel of classic machine learning approaches such as neural networks, which are not fully replicable, lack transparency and traceability, and therefore do not permit any attempts to establish causal inference. We conclude by suggesting routes for future research to advance the design and efficiency of “white box” algorithms.