Automatic segmentation tool for 3D digital rocks by deep learning.

2021 
Accurately segmenting images acquired by computed microtomography (micro-CT) is a non-trivial process due to the wide range of noise types and artifacts present in these images. Current methodologies are often time-consuming, sensitive to noise and artifacts, and require skilled operators to produce accurate results. Motivated by the rapid advancement of deep learning-based segmentation techniques in recent years, we have developed a tool that fully automates the segmentation process in a single step, without the need for extra image-processing steps such as noise filtering or artifact removal. To obtain a general model, we train our network on a dataset of high-quality three-dimensional micro-CT images spanning different scanners, rock types, and resolutions. In addition, we use a domain-specific augmented training pipeline that injects various types of noise, synthetic artifacts, and image transformations/distortions. For validation, we use a synthetic dataset to measure accuracy and analyze sensitivity to noise and artifacts. The results show robust and accurate segmentation for the most common types of noise present in real micro-CT images. We also compared our method's segmentations with those of five expert users working with commercial and open-source software packages on real rock images. We found that most current tools fail to reduce the impact of local and global noise and artifacts. We quantified the variation in human-assisted segmentation results in terms of physical properties and found it to be large. In comparison, the new method is more robust to local noise and artifacts, outperforming human segmentation and giving consistent results. Finally, we compared the porosity of images segmented by our model with experimental porosity measured in the laboratory for ten samples not seen during training, finding very encouraging results.
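To make the augmented training pipeline described above concrete, the sketch below shows one plausible way to inject noise and synthetic artifacts into a 3D volume while keeping its segmentation label aligned. This is a minimal illustration, not the authors' actual implementation: all function names, parameters, and probabilities are assumptions, and it uses only NumPy.

import numpy as np

rng = np.random.default_rng(42)

def add_gaussian_noise(volume, sigma=0.05):
    # Additive Gaussian noise, a simple model of detector noise.
    return volume + rng.normal(0.0, sigma, volume.shape)

def add_ring_artifact(volume, strength=0.1):
    # Concentric rings in each axial slice, mimicking miscalibrated
    # detector elements (a common micro-CT artifact). Parameterization
    # is hypothetical.
    nz, ny, nx = volume.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(yy - ny / 2, xx - nx / 2)
    # One random intensity offset per integer radius bin.
    offsets = rng.normal(0.0, strength, int(r.max()) + 2)
    rings = offsets[r.astype(int)]
    return volume + rings[None, :, :]

def random_flip(volume, label):
    # Geometric transformations must be applied to the image and its
    # segmentation label together so they stay aligned.
    for axis in range(3):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis)
            label = np.flip(label, axis)
    return volume.copy(), label.copy()

def augment(volume, label):
    # Compose the corruptions stochastically at training time; the
    # probabilities here are illustrative, not from the paper.
    if rng.random() < 0.5:
        volume = add_gaussian_noise(volume)
    if rng.random() < 0.3:
        volume = add_ring_artifact(volume)
    return random_flip(volume, label)

# Toy usage: a 64^3 grayscale volume with a binary pore/grain label.
vol = rng.random((64, 64, 64)).astype(np.float32)
lab = (vol > 0.5).astype(np.uint8)
aug_vol, aug_lab = augment(vol, lab)

Training on such corrupted-but-correctly-labeled pairs is what lets the network segment noisy images directly, without a separate filtering or artifact-removal step.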