Pruning MobileNetV2 for Efficient Implementation of Minimum Variance Beamforming

2021 
Beamforming is an essential step in ultrasound image reconstruction that affects both image quality and frame rate. Adaptive methods estimate a set of data-dependent apodization weights; among them, Minimum Variance Beamforming (MVB) is one of the most powerful approaches, performing well regardless of the imaging settings. MVB, however, is not applicable online because it is computationally expensive. Recently, to speed up MVB, we took advantage of state-of-the-art deep learning methods and adapted the MobileNetV2 architecture to train and test a model that mimics MVB. In terms of image quality, our method ranked first in the Challenge on Ultrasound Beamforming with Deep Learning (CUBDL). However, when both image quality and network size were considered, our method was jointly ranked first with another submission that had fewer parameters. The number of parameters and the processing time are especially important for point-of-care ultrasound machines, which have limited size and computational power. Herein, we propose an approach to prune the trained MobileNetV2 to reduce the number of parameters and computational complexity and further speed up beamforming. Results confirm that there is no discernible reduction in network performance, in either visual or quantitative comparisons, after pruning. In terms of memory footprint, the post-pruned network contains 0.3 million parameters compared to 2.3 million before pruning, a reduction by a factor of 7.67. The run-times of MVB and of the pre- and post-pruned models are 4.05, 0.67, and 0.29 min, respectively.
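To illustrate the pruning mechanics described above, the following is a minimal sketch in PyTorch. It uses the stock torchvision MobileNetV2 as a stand-in (the adapted beamforming network and the paper's actual pruning criterion are not specified in this abstract), and applies simple L1 magnitude pruning to the convolutional layers, reporting the remaining non-zero weights as a proxy for the reduced footprint.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import mobilenet_v2

# Stand-in backbone: the stock torchvision MobileNetV2, not the paper's
# adapted beamforming network (which is not described in the abstract).
model = mobilenet_v2()

def count_params(m: nn.Module) -> int:
    """Total number of trainable parameters."""
    return sum(p.numel() for p in m.parameters())

print(f"pre-pruned parameters: {count_params(model) / 1e6:.2f} M")

# Assumed pruning scheme (not confirmed by the abstract): zero out 85% of the
# smallest-magnitude weights in every Conv2d layer, then fold the masks into
# the weights so the pruning becomes permanent.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.85)
        prune.remove(module, "weight")

# Surviving (non-zero) weights as a proxy for the post-pruned footprint.
nonzero = sum(int((p != 0).sum()) for p in model.parameters())
print(f"post-pruned non-zero parameters: {nonzero / 1e6:.2f} M")
```

Note that unstructured magnitude pruning only zeros individual weights; realizing run-time speedups of the kind reported in the abstract generally requires structured (e.g. channel-level) pruning so that entire filters can be removed from the computation.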