Interpretable Machine Learning for Function Approximation in Structural Health Monitoring

2022 
Machine learning may complement physics-based methods for structural health monitoring (SHM), offering higher accuracy among other benefits. However, many of the resulting systems are opaque, making them neither interpretable nor trustworthy. Interpretable machine learning (IML) is an active new research direction that aims to pair algorithmic accuracy with transparency, enabling users to understand their systems. This chapter reviews existing IML work and philosophy, and discusses candidate problems from SHM that exemplify and substantiate IML. Multidisciplinary research has been making strides toward giving end users of shallow sigmoidal artificial neural networks (ANNs) the tools and knowledge to engineer these systems. Notoriously opaque ANNs are made transparent as linear-in-the-weights parameterization tools by using domain knowledge to determine appropriate basis functions. With a small number of hidden nodes activating these basis functions, the modeling capability of sigmoidal ANNs is revealed systematically, without relying on training. The novelty lies in ANN initialization theory and in practical procedures that can be interpreted through domain knowledge. A rich repository of direct (non-iterative) techniques and reusable ANN prototypes can then be aggregated into the basis functions needed for a specific problem, leading to interpretable ANNs as well as improved training performance and generalization, as validated on simulated and real-world data.
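As a minimal sketch of the linear-in-the-weights view described above (an illustration of the general idea, not the chapter's exact procedure), the snippet below fixes the hidden-layer sigmoid parameters from assumed domain knowledge, so that each hidden node becomes a known basis function, and then recovers the output weights with a direct, non-iterative least-squares solve. The values in `centers` and `slopes` and the synthetic target response are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: a shallow sigmoidal ANN treated as a linear-in-the-weights
# model. Hidden-node parameters (centers, slopes) are FIXED from assumed
# domain knowledge, so only the output weights remain unknown; a direct
# (non-iterative) least-squares solve recovers them -- no gradient training.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def design_matrix(x, centers, slopes):
    """Evaluate the fixed sigmoidal basis functions at the inputs x.

    Each column is one hidden node: sigma(slope * (x - center)).
    """
    return sigmoid(slopes[None, :] * (x[:, None] - centers[None, :]))

# Synthetic target standing in for a structural response to approximate.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = np.tanh(3.0 * x) + 0.01 * rng.standard_normal(x.shape)

# Assumed domain knowledge: sigmoid transitions placed where the response
# is expected to vary, with a common steepness (both hypothetical values).
centers = np.linspace(-0.8, 0.8, 9)
slopes = np.full(centers.shape, 6.0)

Phi = design_matrix(x, centers, slopes)
Phi = np.hstack([Phi, np.ones((x.size, 1))])  # output bias column

# Direct (non-iterative) fit of the output weights.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
print(f"RMSE of the fixed-basis sigmoidal network: {rmse:.4f}")
```

Because each column of the design matrix corresponds to a hidden node whose role (a transition at a known location) is set in advance, the fitted weights can be read off and interpreted directly; they could also serve as an interpretable initialization before any subsequent gradient-based training.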