Generating high resolution digital mammogram from digitized film mammogram with conditional generative adversarial network

2020 
Deep-learning-based applications for digital mammography screening are limited by the lack of labeled data. Generating digital mammograms (DMs) from existing labeled digitized screen-film mammogram (DFM) datasets is one approach that may alleviate this problem. Generating high-resolution DMs from DFMs is challenging due to limited network capacity and GPU memory. In this study, we developed a deep learning framework, Cycle-HDDM, with which high-resolution DMs were generated from DFMs. Our Cycle-HDDM model first used a sliding window to crop DFMs and DMs into 256 × 256 patches. Then, we divided the patches into three categories (breast, background, and boundary) using breast masks. We paired patches from the DFM and DM datasets for training, with the constraint that paired patches be sampled from the same category in the two image sets. We used U-Net as the generators and modified the discriminators so that each discriminator output a two-channel image: one channel distinguishing real from synthesized DMs, and the other representing a probability map for the breast mask. We designed a study to evaluate the usefulness of Cycle-HDDM in a segmentation task, the objective of which was to estimate the percentage of breast density (PD) on DMs using a deep neural network (DNN). With IRB approval, 1651 DFMs and 813 DMs were collected. Both DFMs and DMs were normalized to a pixel size of 100 μm × 100 μm for the experiments. The results show that the DMs synthesized by Cycle-HDDM significantly improved (p < 0.001) DNN-based mammographic density segmentation.
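A minimal sketch of the patch extraction, categorization, and pairing steps described in the abstract, assuming NumPy arrays for the mammogram and its binary breast mask. The stride, the breast-fraction thresholds, the random pairing strategy, and the function names (`categorize_patches`, `pair_patches`) are illustrative assumptions, not details reported in the paper.

```python
import numpy as np

PATCH = 256  # 256 x 256 patches, as described in the abstract


def categorize_patches(image, breast_mask, stride=128,
                       background_max=0.05, breast_min=0.95):
    """Slide a 256x256 window over the image and label each patch.

    Returns a list of (patch, category) tuples, where category is
    'background', 'breast', or 'boundary' depending on the fraction
    of breast pixels inside the window (thresholds are assumptions).
    """
    patches = []
    h, w = image.shape
    for y in range(0, h - PATCH + 1, stride):
        for x in range(0, w - PATCH + 1, stride):
            window = image[y:y + PATCH, x:x + PATCH]
            frac = breast_mask[y:y + PATCH, x:x + PATCH].mean()
            if frac <= background_max:
                category = "background"
            elif frac >= breast_min:
                category = "breast"
            else:
                category = "boundary"
            patches.append((window, category))
    return patches


def pair_patches(dfm_patches, dm_patches, rng=None):
    """Pair DFM and DM patches drawn from the same category."""
    rng = rng or np.random.default_rng(0)
    pairs = []
    for cat in ("breast", "background", "boundary"):
        dfm_cat = [p for p, c in dfm_patches if c == cat]
        dm_cat = [p for p, c in dm_patches if c == cat]
        for patch in dfm_cat:
            if dm_cat:
                pairs.append((patch, dm_cat[rng.integers(len(dm_cat))]))
    return pairs
```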
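The modified discriminator could look like the following PatchGAN-style sketch in PyTorch: a shared convolutional trunk ending in a two-channel output map, one channel scoring real versus synthesized DM patches and the other giving a per-pixel breast-mask probability. The layer depths, normalization choice, and class name (`TwoHeadDiscriminator`) are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class TwoHeadDiscriminator(nn.Module):
    """Patch discriminator with a two-channel output map:
    channel 0 = real/synthesized logits, channel 1 = breast-mask probability."""

    def __init__(self, in_ch=1, base=64):
        super().__init__()

        def block(cin, cout, stride):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 4, stride, 1),
                nn.InstanceNorm2d(cout),
                nn.LeakyReLU(0.2, inplace=True),
            )

        self.features = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1),
            nn.LeakyReLU(0.2, inplace=True),
            block(base, base * 2, 2),
            block(base * 2, base * 4, 2),
            block(base * 4, base * 8, 1),
        )
        self.head = nn.Conv2d(base * 8, 2, 4, 1, 1)  # two output channels

    def forward(self, x):
        out = self.head(self.features(x))
        real_fake = out[:, 0:1]                 # real vs. synthesized logits
        mask_prob = torch.sigmoid(out[:, 1:2])  # breast-mask probability map
        return real_fake, mask_prob


# Example: score a batch of 256 x 256 single-channel patches.
disc = TwoHeadDiscriminator()
logits, mask = disc(torch.randn(4, 1, 256, 256))
```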