Interpreting neural network models of residual scalar flux.

2020 
We show that, in addition to providing effective and competitive closures, artificial neural networks (ANNs), when analysed in terms of their dynamics and physically relevant diagnostics, can be interpretable and provide useful insight into the ongoing task of developing and improving turbulence closures. In the context of large-eddy simulations (LES) of a passive scalar in homogeneous isotropic turbulence, exact subfilter fluxes obtained by filtering direct numerical simulations (DNS) are used both to train deep ANN models as functions of filtered variables and to optimise the coefficients of common spatio-temporally local LES closures. A priori analysis of the subfilter scalar variance transfer rate demonstrates that the learnt ANN models outperform optimised turbulent-Prandtl-number closures and nonlinear gradient models. Next, a posteriori solutions are obtained with each model over several integral timescales. These experiments reveal, through single- and multi-point diagnostics, that the ANN models track the exact resolved scalar variance in time more accurately than the other subfilter flux models at a given filter length scale. Finally, we interpret the ANNs statistically with a differential sensitivity analysis and show that they learn dynamics reminiscent of so-called "mixed models", understood here as comprising both a structural and a functional component. Besides enabling enhanced-accuracy LES of passive scalars, we anticipate that this work will contribute to the use of neural network models as tools for interpretability, robustness and model discovery.
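To make two of the technical ingredients above concrete, the sketch below illustrates (i) the exact subfilter scalar flux extracted by filtering DNS fields and (ii) a gradient-based (differential) sensitivity analysis of a trained network. It is a minimal JAX example under stated assumptions: the top-hat filter, the MLP architecture, and the names `box_filter`, `exact_subfilter_flux`, `mlp` and `mean_sensitivity` are illustrative placeholders, not the filters or network actually used in the paper.

```python
import jax
import jax.numpy as jnp


def box_filter(f, width):
    """Crude top-hat filter along each axis of a periodic field
    (a stand-in for the filters applied to the DNS data)."""
    for axis in range(f.ndim):
        shifts = range(-(width // 2), width // 2 + 1)
        f = sum(jnp.roll(f, s, axis=axis) for s in shifts) / len(shifts)
    return f


def exact_subfilter_flux(u_i, c, width):
    """tau_i = bar(u_i c) - bar(u_i) bar(c): the residual scalar flux
    component that an ANN closure would be trained to reproduce."""
    return box_filter(u_i * c, width) - box_filter(u_i, width) * box_filter(c, width)


def mlp(params, x):
    """Tiny fully connected network mapping a vector of filtered inputs
    to one subfilter flux component (architecture is illustrative)."""
    for W, b in params[:-1]:
        x = jnp.tanh(W @ x + b)
    W, b = params[-1]
    return (W @ x + b)[0]


# Differential sensitivity: d(model output)/d(inputs), evaluated per sample
# and then averaged, to see which filtered variables the closure relies on.
sample_sensitivity = jax.vmap(jax.grad(mlp, argnums=1), in_axes=(None, 0))


def mean_sensitivity(params, inputs):
    return jnp.mean(sample_sensitivity(params, inputs), axis=0)


# Example with random weights and inputs (shapes only, no trained model):
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = [(jax.random.normal(k1, (16, 6)), jnp.zeros(16)),
          (jax.random.normal(k2, (1, 16)), jnp.zeros(1))]
inputs = jax.random.normal(k3, (1024, 6))   # e.g. filtered velocity/scalar gradients
print(mean_sensitivity(params, inputs))     # one averaged derivative per input
```

Averaging the per-sample input gradients, as in `mean_sensitivity`, is one simple way to probe whether a learnt closure responds to filtered scalar gradients in the manner of a structural (gradient) model or behaves more like a functional (eddy-diffusivity) model, which is the spirit of the "mixed model" interpretation discussed in the abstract.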