Can Dilated Convolutions Capture Ultrasound Video Dynamics?

2018 
Automated analysis of free-hand ultrasound video sweeps is an important topic in diagnostic and interventional imaging; however, detecting standard planes is a notoriously challenging task due to low-quality data and variability in the contrast, appearance, and placement of anatomical structures. Conventionally, such sequential data are modelled with computationally heavy Recurrent Neural Networks (RNNs). In this paper, we propose a convolutional (CNN) architecture for standard plane detection in free-hand ultrasound videos. Our contributions are twofold. First, we show that a simple convolutional architecture can characterize long-range dependencies in challenging ultrasound video sequences, outperforming canonical LSTMs and the recently proposed two-stream spatial ConvNet by a large margin (89% versus 83% and 84%, respectively). Second, to understand what evidence the model uses for decision making, we experiment with soft-attention layers for feature pooling and train the entire model end-to-end with only standard classification losses. We find that the input-dependent attention maps not only boost the network's performance, but also highlight data patterns deemed important for particular structures, thereby providing interpretability when the models are deployed.
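The abstract describes two components: a stack of dilated convolutions applied over time to capture long-range dependencies across frames, and a soft-attention layer that pools frame features into a clip-level descriptor while exposing an interpretable attention map. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of those two ideas, and all layer sizes and names (e.g. `feat_dim`, `num_classes`, `dilations`) are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch (illustrative only, not the paper's code) of:
# (1) 1-D dilated convolutions over per-frame features for long-range
#     temporal context, and (2) soft-attention pooling over time.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedTemporalBlock(nn.Module):
    """One residual block of dilated temporal convolution over (B, C, T) features."""

    def __init__(self, channels: int, dilation: int, kernel_size: int = 3):
        super().__init__()
        padding = (kernel_size - 1) // 2 * dilation  # keep temporal length unchanged
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=padding, dilation=dilation)
        self.norm = nn.BatchNorm1d(channels)

    def forward(self, x):
        return F.relu(x + self.norm(self.conv(x)))  # residual connection


class SoftAttentionPool(nn.Module):
    """Input-dependent attention weights over time, used to pool frame features."""

    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x):                              # x: (B, C, T)
        attn = torch.softmax(self.score(x), dim=-1)    # (B, 1, T) attention map
        pooled = (x * attn).sum(dim=-1)                # (B, C) attended descriptor
        return pooled, attn                            # attn can be inspected for interpretability


class DilatedPlaneClassifier(nn.Module):
    """Dilated temporal ConvNet + soft-attention pooling for plane classification."""

    def __init__(self, feat_dim: int = 512, num_classes: int = 4,
                 dilations=(1, 2, 4, 8)):
        super().__init__()
        # Exponentially growing dilation rates enlarge the temporal receptive
        # field exponentially while keeping the layer count small.
        self.temporal = nn.Sequential(
            *[DilatedTemporalBlock(feat_dim, d) for d in dilations])
        self.pool = SoftAttentionPool(feat_dim)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, frame_features):        # (B, T, feat_dim) from a 2-D CNN backbone
        x = frame_features.transpose(1, 2)    # -> (B, feat_dim, T) for Conv1d
        x = self.temporal(x)
        pooled, attn = self.pool(x)
        # Trained end-to-end with a standard cross-entropy classification loss.
        return self.classifier(pooled), attn
```

Under these assumptions, the whole model is trained with a standard classification loss only, and the per-frame attention weights returned alongside the logits are what one would visualize to see which frames the network considered important for a given structure.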