Model-Based Noise Reduction in Scatter Correction Using a Deep Convolutional Neural Network for Radiography

2019 
In radiography, scattered X-rays cause contrast loss in X-ray images, limiting their clinical usefulness, and are typically reduced using scatter correction methods. One difficulty associated with the existing scatter correction methods is noise amplification caused by the correction itself. In this study, we investigated a model-based noise reduction method using the U-Net, a deep convolutional neural network originally proposed for image segmentation, to provide a practical solution to the noise amplification problem in conventional scatter correction methods. In this method, the noise properties of an X-ray image after scatter correction are first analyzed using a Poisson-Gaussian mixture model, the corresponding noise parameters are then used by a trained U-Net model to predict the image noise, and the scatter-corrected image is finally recovered by subtracting the predicted noise. We performed a systematic simulation and an experiment to demonstrate the method's viability and investigated the image characteristics in terms of several image metrics. Our results showed that the degradation of the image characteristics by scattered X-rays and noise was effectively removed by the proposed method.
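The abstract describes a three-step pipeline (Poisson-Gaussian noise analysis, U-Net noise prediction, subtraction of the predicted noise). The sketch below illustrates that flow under stated assumptions: the Poisson-Gaussian mixture is written in its common form, where the variance of a pixel value I is approximately gain * I + sigma^2, and predict_noise is a hypothetical placeholder for the trained U-Net; the parameter values and function names are illustrative, not the authors' implementation.

    # Minimal sketch (Python/NumPy) of the noise model and subtraction step.
    # "predict_noise" stands in for the trained U-Net and is a hypothetical callable.
    import numpy as np

    def poisson_gaussian_noise(clean, gain=0.01, sigma=0.02, rng=None):
        """Simulate the Poisson-Gaussian mixture model: signal-dependent
        Poisson (shot) noise scaled by the detector gain plus additive
        Gaussian electronic noise. Assumed parameterization, for illustration."""
        rng = np.random.default_rng() if rng is None else rng
        shot = rng.poisson(clean / gain) * gain                 # Poisson component
        read = rng.normal(0.0, sigma, size=clean.shape)         # Gaussian component
        return shot + read

    def denoise_scatter_corrected(corrected_img, predict_noise):
        """Recover the scatter-corrected image by subtracting the noise map
        predicted by a trained network (injected here as a callable)."""
        noise_map = predict_noise(corrected_img)                 # U-Net estimates the noise image
        return corrected_img - noise_map

    if __name__ == "__main__":
        clean = np.full((64, 64), 0.5)
        noisy = poisson_gaussian_noise(clean)
        # Oracle predictor used only to show the subtraction step end to end.
        restored = denoise_scatter_corrected(noisy, lambda x: x - clean)
        print(float(np.abs(restored - clean).max()))             # ~0.0

In the paper's setting the callable would be a U-Net trained on images corrupted with noise drawn from the fitted Poisson-Gaussian parameters, so that its output approximates the noise component of the scatter-corrected image.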