Deep adversarial domain adaptation for breast cancer screening from mammograms.

2021 
The early detection of breast cancer greatly increases the chances of choosing a successful treatment plan. Deep learning approaches are used in breast cancer screening and have achieved promising results when a large-scale labeled dataset is available for training. However, their performance may drop dramatically when annotated data are limited. In this paper, we propose a method called deep adversarial domain adaptation (DADA) to improve the performance of breast cancer screening using mammography. Specifically, our aim is to extract knowledge from a public dataset (source domain) and transfer the learned knowledge to improve detection performance on the target dataset (target domain). Because the source and target domains have different distributions, the proposed method adopts an adversarial learning technique to perform domain adaptation between the two domains. In particular, the adversarial procedure is trained by exploiting the disagreement between two classifiers. To evaluate the proposed method, the well-labeled, image-level public dataset Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) is employed as the source domain. Mammography samples from the West China Hospital were collected to construct our target-domain dataset, and these samples are annotated at the case level based on the corresponding pathological reports. The experimental results demonstrate the effectiveness of the proposed method compared with several other state-of-the-art automatic breast cancer screening approaches.
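The classifier-disagreement idea described above can be illustrated with a toy sketch. This is not the authors' implementation; it is a minimal numpy example of the general alternating scheme used in discrepancy-based adversarial adaptation: (A) train a shared feature extractor and two classifiers on labeled source data, (B) update the classifiers to maximize their disagreement on unlabeled target data, and (C) update the feature extractor to minimize that disagreement. All names (`G`, `F1`, `F2`, the toy data, the finite-difference gradients) are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def discrepancy(p1, p2):
    # disagreement between the two classifiers' class-probability outputs
    return np.abs(p1 - p2).mean()

# Toy stand-ins for the two domains: labeled "source" and unlabeled,
# distribution-shifted "target" (2-D inputs, 2 classes).
Xs = rng.normal(size=(64, 2)); ys = (Xs[:, 0] > 0).astype(int)
Xt = rng.normal(loc=0.5, size=(64, 2))

# Shared feature extractor G and two classifier heads F1, F2 (all linear).
params = {
    "G":  rng.normal(scale=0.1, size=(2, 4)),
    "F1": rng.normal(scale=0.1, size=(4, 2)),
    "F2": rng.normal(scale=0.1, size=(4, 2)),
}

def forward(X, p):
    f = np.tanh(X @ p["G"])                      # shared features
    return softmax(f @ p["F1"]), softmax(f @ p["F2"])

def src_loss(p):
    # cross-entropy of both classifiers on labeled source samples
    p1, p2 = forward(Xs, p)
    onehot = np.eye(2)[ys]
    return (-(onehot * np.log(p1 + 1e-9)).sum(1).mean()
            - (onehot * np.log(p2 + 1e-9)).sum(1).mean())

def tgt_disc(p):
    p1, p2 = forward(Xt, p)
    return discrepancy(p1, p2)

def num_grad(loss, p, name, eps=1e-5):
    # finite-difference gradient; fine for a toy this small
    g = np.zeros_like(p[name])
    it = np.nditer(p[name], flags=["multi_index"])
    for _ in it:
        i = it.multi_index
        p[name][i] += eps; hi = loss(p)
        p[name][i] -= 2 * eps; lo = loss(p)
        p[name][i] += eps
        g[i] = (hi - lo) / (2 * eps)
    return g

lr = 0.2
loss0 = src_loss(params)
for step in range(30):
    # Step A: train G, F1, F2 on labeled source data.
    for n in ("G", "F1", "F2"):
        params[n] -= lr * num_grad(src_loss, params, n)
    # Step B: fix G; classifiers maximize target disagreement
    # while staying accurate on the source.
    for n in ("F1", "F2"):
        params[n] -= lr * num_grad(lambda p: src_loss(p) - tgt_disc(p), params, n)
    # Step C: fix F1, F2; feature extractor minimizes the disagreement,
    # pushing target features toward regions where the classifiers agree.
    params["G"] -= lr * num_grad(tgt_disc, params, "G")
```

The intuition is that target samples on which the two classifiers disagree lie outside the support of the source features, so shrinking the disagreement aligns the two feature distributions without needing target labels.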