Ultrasound deep beamforming using a multiconstrained hybrid generative adversarial network.

2021 
Abstract

Ultrasound beamforming is a principal factor in high-quality ultrasound imaging. The conventional delay-and-sum (DAS) beamformer generates images with high computational speed but low spatial resolution; thus, many adaptive beamforming methods have been introduced to improve image quality. However, these adaptive beamforming methods suffer from high computational complexity, which limits their practical application. An advanced beamformer that can overcome this spatiotemporal resolution bottleneck is therefore highly desirable. In this paper, we propose a novel deep-learning-based algorithm, the multiconstrained hybrid generative adversarial network (MC-HGAN) beamformer, which rapidly achieves high-quality ultrasound imaging. The MC-HGAN beamformer directly establishes a one-shot mapping between the radio frequency (RF) signals and the reconstructed ultrasound images through a hybrid generative adversarial network (GAN) model. Through two dedicated branches, the hybrid GAN model extracts both RF-based and image-based features and integrates them through a fusion module. We also introduce a multiconstrained training strategy that provides comprehensive guidance for the network by invoking intermediate results to co-constrain the training process. Moreover, our beamformer is designed to adapt to various ultrasonic emission modes, which improves its generalizability for clinical applications. We conducted experiments on a variety of datasets acquired with line-scan and plane-wave emission modes and evaluated the results with both similarity-based and ultrasound-specific metrics. The comparisons demonstrate that the MC-HGAN beamformer generates ultrasound images of higher quality than other deep-learning-based methods and shows very high robustness across different clinical datasets. This technology also shows great potential for real-time imaging.
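The abstract describes a generator with an RF branch and an image branch whose features are merged by a fusion module before the reconstructed image is produced. As a rough illustration only, the minimal PyTorch sketch below assumes the RF branch consumes the raw channel-data frame, the image branch consumes a conventionally beamformed (e.g. DAS) reference image on the same grid, and both are fused by concatenation; the layer choices, input shapes, and the use of a DAS image as the image-branch input are assumptions, not the paper's actual MC-HGAN architecture, and the adversarial discriminator and multiconstrained losses are omitted.

```python
import torch
import torch.nn as nn

class HybridGeneratorSketch(nn.Module):
    """Illustrative two-branch generator with a fusion module.

    This is a hypothetical sketch of the idea in the abstract, not the
    published MC-HGAN network: all layer widths and kernel sizes are
    placeholder assumptions.
    """

    def __init__(self, feat=32):
        super().__init__()
        # RF branch: treats the (elements x fast-time samples) RF frame as a 1-channel image
        self.rf_branch = nn.Sequential(
            nn.Conv2d(1, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Image branch: features from a low-quality reference image (assumed here to be a DAS image)
        self.img_branch = nn.Sequential(
            nn.Conv2d(1, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Fusion module: concatenates branch features and maps them to one output image
        self.fusion = nn.Sequential(
            nn.Conv2d(2 * feat, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, kernel_size=1),
        )

    def forward(self, rf_frame, das_image):
        # Both inputs are assumed to be resampled to the same H x W grid
        f_rf = self.rf_branch(rf_frame)
        f_img = self.img_branch(das_image)
        return self.fusion(torch.cat([f_rf, f_img], dim=1))

# Example: one 256 x 256 RF frame paired with a matching DAS image
gen = HybridGeneratorSketch()
rf = torch.randn(1, 1, 256, 256)
das = torch.randn(1, 1, 256, 256)
out = gen(rf, das)  # reconstructed image, shape (1, 1, 256, 256)
```

In a GAN setting such a generator would be trained against a discriminator, and the multiconstrained strategy described in the abstract would add losses on intermediate outputs; those components are not shown here because the abstract does not specify them.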