Historical Origins for the Overestimation of Mammographic Sensitivity.

2021 
The sensitivity of screening mammography for the early detection of breast cancer has improved over the years due to advances in technology. However, guidelines for screening mammography are often based on the mortality reductions demonstrated in the historic trials, where sensitivity with first-generation mammography was relatively low. As attempts are made to establish risk-benefit ratios for population screening, it is important to understand the wide range of sensitivities that have been reported for mammography. Original calculations of mammographic sensitivity were often based on studies that included palpable tumors, generating inflated numbers not fully applicable to non-palpable tumors. When restricted to asymptomatic screening, sensitivity calculations were often based on the inverse of the interval cancer rate, a relatively inaccurate method because breast cancers missed on mammography can remain clinically undetected for several years. It was not until multi-modality imaging was developed, primarily ultrasound and MRI, that sensitivity determinations could be made in real time by cross-checking outcomes across modalities. From this, it became apparent that there is a strong correlation between breast density and sensitivity, such that a single number to denote mammographic sensitivity is misleading. The increasing awareness that mortality reductions in the historic trials were achieved with a low-sensitivity tool has prompted great interest in further technologic improvements in mammography, as well as in multi-modality imaging approaches for women with high density and/or high risk. To appreciate the potential benefit of these new approaches, it is helpful to understand the historical basis for overestimating the sensitivity of screening mammography.
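
As a rough illustration of the interval-cancer approach mentioned above, program sensitivity was commonly estimated as the proportion of screen-detected cancers among all cancers surfacing within the screening interval. The counts below are hypothetical, chosen only to show how delayed clinical presentation of missed cancers inflates the estimate:

\[
\widehat{\text{Sensitivity}} = \frac{SD}{SD + IC}
\]

where \(SD\) denotes screen-detected cancers and \(IC\) denotes interval cancers presenting clinically before the next screen. With hypothetical counts of \(SD = 80\) and \(IC = 10\), the estimate is \(80/(80+10) \approx 0.89\). If, however, another 10 mammographically missed cancers remain clinically occult beyond the screening interval, the true denominator is 100 and the true sensitivity is only 0.80, which is why the inverse-of-interval-cancers method tends to overestimate sensitivity.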