Make It Easier: An Empirical Simplification of a Deep 3D Segmentation Network for Human Body Parts.

2021 
Nowadays, computer vision brings benefits to a variety of scenarios, such as home robotics, autonomous driving, and healthcare; the latter is the main application scenario of this work. In this paper, we propose a simplified implementation of a state-of-the-art 3D semantic segmentation deep convolutional network used to automate the synthesis of orthopedic casts from 3D scans of patients’ arms. The proposed network, based on the PointNet deep learning architecture, is capable of recognising and discriminating among several regions of interest on the scan of the patient’s arm, such as the regions around the thumb, the wrist, and the elbow. From these segmented regions it is then possible to extract important measurements and features to synthesize a custom 3D-printed cast. This task is very specific and difficult to address with standard 3D segmentation algorithms; moreover, it requires highly specialized human intervention in data collection and preparation. Until now, semantic regions of human body parts have typically been annotated manually by experts to ensure the required accuracy. Unfortunately, this process is time-consuming and may limit the amount of data available for data-driven approaches. In this work, we also investigate the use of data augmentation to cope with such limited datasets and analyze model performance by means of cross-validation, which shows that the proposed architecture can predict the regions of interest with high accuracy. This is an encouraging result for further research on adapting deep models to challenging applications for which clean and consistent data collections are often not immediately available. Thus, an empirical approach based on pruning network parameters and layers, combined with a consistent data augmentation strategy, can prove highly effective.
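The abstract does not specify which layers and parameters were pruned, the number of segmentation classes, or the exact augmentation used, so the following is only a minimal sketch of the general idea: a pared-down PointNet-style per-point classifier combined with simple point-cloud augmentation. The class count, layer widths, `SimplePointNetSeg` and `augment` names, rotation axis, and jitter magnitude are illustrative assumptions, as is the choice of PyTorch.

```python
# Minimal sketch (assumptions): a simplified PointNet-like segmentation
# network (no input/feature transform nets, reduced MLP widths) and a
# basic augmentation step for limited 3D arm-scan datasets.
import math
import torch
import torch.nn as nn


class SimplePointNetSeg(nn.Module):
    """Per-point segmentation with shared MLPs and a global max-pooled feature."""

    def __init__(self, num_classes: int = 4):  # e.g. thumb, wrist, elbow, other (assumed)
        super().__init__()
        # Per-point feature extractor (shared MLP implemented as 1x1 convolutions)
        self.local = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
        )
        # Global feature obtained by max-pooling over all points
        self.global_mlp = nn.Sequential(
            nn.Conv1d(128, 256, 1), nn.BatchNorm1d(256), nn.ReLU(),
        )
        # Segmentation head: concatenate local and global features per point
        self.head = nn.Sequential(
            nn.Conv1d(128 + 256, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, 3, N) point coordinates
        local = self.local(xyz)                                          # (B, 128, N)
        glob = self.global_mlp(local).max(dim=2, keepdim=True).values   # (B, 256, 1)
        glob = glob.expand(-1, -1, xyz.shape[2])                         # broadcast to every point
        return self.head(torch.cat([local, glob], dim=1))                # (B, num_classes, N)


def augment(xyz: torch.Tensor) -> torch.Tensor:
    """Random rotation about the z axis plus small Gaussian jitter (illustrative)."""
    theta = float(torch.rand(1)) * 2 * math.pi
    c, s = math.cos(theta), math.sin(theta)
    rot = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return rot @ xyz + 0.005 * torch.randn_like(xyz)


# Usage example: segment one augmented scan of 2048 points.
model = SimplePointNetSeg(num_classes=4)
points = augment(torch.randn(3, 2048)).unsqueeze(0)   # (1, 3, 2048)
logits = model(points)                                 # (1, 4, 2048)
labels = logits.argmax(dim=1)                          # per-point region labels
```

In this spirit, "making it easier" amounts to dropping the transformation sub-networks and shrinking the shared MLPs, while relying on augmentation (random rotations and jitter here) to compensate for the small, expert-annotated dataset.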