Tuning machine learning dropout for subsurface uncertainty model accuracy

2021 
Abstract Machine learning hyperparameter tuning generally relies on minimizing the prediction error and maximizing the accuracy at withheld testing data locations. However, for many subsurface applications, prediction accuracy is not sufficient, and we must consider the goodness (accuracy and precision) of the entire uncertainty model. Bayesian deep neural networks provide a robust framework to model uncertainty, but at a high computational cost and with difficulty scaling to high-dimensional parameter spaces. Dropout offers a practical alternative for calculating prediction model realizations that represent the uncertainty model. We propose a method to compare entire uncertainty models based on these prediction realizations, and the addition of a single objective, known as uncertainty model goodness, to the loss function to integrate model uncertainty for improved tuning of the dropout frequency in deep learning models. We test our method with a subsurface flow surrogate model based on an image-to-image regression problem. Implementing the proposed method with the uncertainty model goodness results in accurate and precise uncertainty models using the tuned dropout hyperparameter. We test the robustness and generalization of the proposed method with out-of-distribution testing data.
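The dropout-based uncertainty model described above can be sketched in a few lines: dropout is left active at prediction time (the standard Monte Carlo dropout interpretation), repeated stochastic forward passes yield prediction realizations, and the realizations are scored by how well their symmetric probability intervals cover the truth. The tiny MLP, the stand-in "truth", and the exact form of the goodness summary below are illustrative assumptions, not the paper's surrogate model or its precise objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny regression network; the paper's surrogate is an
# image-to-image model, but a 1-D MLP suffices to show the mechanics.
W1 = rng.normal(scale=0.5, size=(1, 64)); b1 = np.zeros(64)
W2 = rng.normal(scale=0.5, size=(64, 1)); b2 = np.zeros(1)

def stochastic_forward(x, p_drop, rng):
    """One forward pass with dropout left active at prediction time."""
    h = np.maximum(x @ W1 + b1, 0.0)       # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop   # Bernoulli keep mask
    h = h * mask / (1.0 - p_drop)          # inverted-dropout rescaling
    return h @ W2 + b2

def prediction_realizations(x, p_drop=0.2, n_real=200, seed=1):
    """Monte Carlo dropout: repeated stochastic passes give realizations."""
    rng = np.random.default_rng(seed)
    return np.stack([stochastic_forward(x, p_drop, rng)
                     for _ in range(n_real)])

def uncertainty_coverage(realizations, y_true, probs):
    """Fraction of truths inside the symmetric p-probability interval
    of the realizations, for each p in `probs`."""
    cov = []
    for p in probs:
        lo, hi = (1.0 - p) / 2.0, (1.0 + p) / 2.0
        lower = np.quantile(realizations, lo, axis=0)
        upper = np.quantile(realizations, hi, axis=0)
        cov.append(np.mean((y_true >= lower) & (y_true <= upper)))
    return np.array(cov)

x = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
reals = prediction_realizations(x)   # (n_real, n_points, 1) realizations
y_true = reals[0]                    # stand-in "truth" for this demo only
probs = np.linspace(0.1, 0.9, 9)
coverage = uncertainty_coverage(reals, y_true, probs)
# Goodness-style summary (assumed form): 1 minus the mean absolute
# deviation of empirical coverage from the nominal probability.
goodness = 1.0 - np.abs(coverage - probs).mean()
```

Under this scoring, an accurate uncertainty model has coverage close to the nominal probability for every interval, and the dropout frequency `p_drop` can be tuned to maximize the goodness term alongside prediction accuracy.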