A Deep Learning Approach to Segment and Classify C-Shaped Canal Morphologies in Mandibular Second Molars Utilizing Cone-Beam Computed Tomography.

2021 
INTRODUCTION: The identification of C-shaped root-canal anatomy on radiographic images affects clinical decision-making and treatment. The aims of this study were to develop a deep learning (DL) model to classify C-shaped canal anatomy in mandibular second molars from cone-beam computed tomography (CBCT) volumes and to compare the performance of three different architectures.

METHODS: U-Net, Residual U-Net, and Xception U-Net architectures were used for image segmentation and classification of C-shaped anatomies. Model training and validation were performed on 100 of 135 available limited field-of-view CBCT volumes containing mandibular molars with C-shaped anatomy; the remaining 35 CBCTs were used for testing. Voxel-matching accuracy of the automated labeling of the C-shaped anatomy was assessed with the Dice index. Mean sensitivity for predicting the correct C-shape subcategory was calculated from detection accuracy. One-way ANOVA and post-hoc Tukey HSD tests were used for statistical evaluation.

RESULTS: Mean Dice coefficients on the test dataset were 0.768±0.0349 for Xception U-Net, 0.736±0.0297 for Residual U-Net, and 0.660±0.0354 for U-Net. The performance of the three models differed significantly overall (ANOVA; P=0.000779). Both Xception U-Net (Q=7.23; P=0.00070) and Residual U-Net (Q=5.09; P=0.00951) performed significantly better than U-Net (post-hoc Tukey HSD). Mean sensitivity values were 0.786±0.0378 for Xception U-Net, 0.746±0.0391 for Residual U-Net, and 0.720±0.0495 for U-Net. Mean positive predictive values (PPV) were 77.6%±0.1998 for U-Net, 78.2%±0.1971 for Residual U-Net, and 80.0%±0.1098 for Xception U-Net. Adding contrast-limited adaptive histogram equalization (CLAHE) improved overall architecture efficacy by a mean of 4.6% (P<0.0001).

CONCLUSIONS: DL may aid in the detection and classification of C-shaped canal anatomy.
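For orientation, the following is a minimal Python sketch of the two quantities named in the methods: the Dice overlap index used to score the voxel-matching accuracy of the automated labels, and a CLAHE preprocessing step of the kind reported to improve performance. It is not the authors' implementation; the NumPy/OpenCV usage and the clip-limit and tile-size defaults are illustrative assumptions.

```python
import numpy as np
import cv2


def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Voxel-wise Dice overlap between a predicted and a ground-truth binary mask."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Dice = 2|A ∩ B| / (|A| + |B|); define empty-vs-empty as perfect overlap.
    return 2.0 * float(intersection) / denom if denom > 0 else 1.0


def apply_clahe(slice_8bit: np.ndarray,
                clip_limit: float = 2.0,
                tile_grid: tuple = (8, 8)) -> np.ndarray:
    """Contrast-limited adaptive histogram equalization on one 8-bit CBCT slice.

    clip_limit and tile_grid are generic OpenCV defaults, not the study's settings.
    """
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(slice_8bit)
```

In a segmentation pipeline such as the one described, CLAHE would be applied slice by slice before the volume is fed to the U-Net variant, and the Dice coefficient would be computed between each model's predicted C-shaped canal mask and the expert-labeled ground truth on the test set.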