Robust model benchmarking and bias-imbalance in data-driven materials science: a case study on MODNet

2021 
As the number of novel data-driven approaches to materials science continues to grow, it is crucial to perform consistent quality, reliability, and applicability assessments of model performance. In this paper, we benchmark the Materials Optimal Descriptor Network (MODNet) method and architecture against the recently released MatBench v0.1, a curated test suite of materials datasets. MODNet is shown to outperform the current leaders on 4 of the 13 tasks and to closely match them on a further 3; it performs particularly well when the number of samples is below 10,000. Attention is paid to two topics of concern when benchmarking models. First, we encourage the reporting of a more diverse set of metrics, as this leads to a more comprehensive and holistic comparison of model performance. Second, an equally important task is assessing a model's uncertainty with respect to a target domain. By applying a distance metric in feature space, we find that significant variations in validation errors can be observed, depending on the imbalance and bias in the training set (i.e., the similarity between training and application space). Both issues are often overlooked, yet they are important for successful real-world applications of machine learning in materials science and condensed matter.
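
For context, MatBench evaluation follows a fixed fold-by-fold protocol. The sketch below illustrates that loop with the matbench Python package; it is a minimal illustration only, in which a RandomForestRegressor trained on placeholder features stands in for MODNet, whose own featurization and model classes are not reproduced here.

    # Minimal sketch of the MatBench v0.1 evaluation loop.
    # A RandomForestRegressor on dummy features stands in for MODNet.
    import numpy as np
    from matbench.bench import MatbenchBenchmark
    from sklearn.ensemble import RandomForestRegressor

    def featurize(inputs):
        # Placeholder featurizer: real runs derive physical descriptors
        # from the input structures/compositions (e.g. via matminer).
        return np.zeros((len(inputs), 1))

    mb = MatbenchBenchmark(autoload=False, subset=["matbench_dielectric"])

    for task in mb.tasks:
        task.load()
        for fold in task.folds:
            train_inputs, train_outputs = task.get_train_and_val_data(fold)
            model = RandomForestRegressor(n_estimators=100)
            model.fit(featurize(train_inputs), train_outputs)

            test_inputs = task.get_test_data(fold, include_target=False)
            task.record(fold, model.predict(featurize(test_inputs)))

    mb.to_file("benchmark_results.json.gz")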
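
The first point, reporting a more diverse set of metrics, can be illustrated with standard scikit-learn utilities; the target and prediction arrays below are hypothetical.

    # Reporting several regression metrics rather than MAE alone
    # gives a more holistic picture of model performance.
    import numpy as np
    from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                                 median_absolute_error, max_error)

    y_true = np.array([1.2, 0.8, 2.5, 3.1, 0.4])  # hypothetical targets
    y_pred = np.array([1.0, 0.9, 2.9, 2.7, 0.6])  # hypothetical predictions

    print("MAE:      ", mean_absolute_error(y_true, y_pred))
    print("RMSE:     ", mean_squared_error(y_true, y_pred) ** 0.5)
    print("MedianAE: ", median_absolute_error(y_true, y_pred))
    print("Max error:", max_error(y_true, y_pred))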
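
The second point can be probed, for instance, with a nearest-neighbour distance in standardized feature space between each validation sample and the training set. The sketch below uses random placeholder features and illustrates the general idea only, not the exact distance metric used in the paper.

    # Illustrative train/application similarity check: nearest-neighbour
    # distance in feature space between validation and training samples.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 20))  # placeholder feature matrices
    X_val = rng.normal(size=(100, 20))

    scaler = StandardScaler().fit(X_train)
    nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(X_train))
    dist, _ = nn.kneighbors(scaler.transform(X_val))

    # Large distances flag validation samples that are dissimilar to the
    # training domain, where larger prediction errors can be expected.
    print("mean NN distance:", dist.mean())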