Automated detection of surgical wounds in videos of open neck procedures using a mask R-CNN

2021 
Open surgery represents a dominant proportion of procedures performed, but has lagged behind endoscopic surgery in video-based insights due to the difficulty of obtaining high-quality open surgical video. Automated detection of the open surgical wound would enhance tracking and stabilization of body-worn cameras to optimize video capture for these procedures. We present results using a mask R-CNN to identify the surgical wound (the “area of interest”, AOI) in image sets derived from 27 open neck procedures (a 2310-image training/validation set and a 1163-image testing set). Bounding box application to the surgical wound was reliable (F-1 > 0.905) in the testing sets, with a <5% false positive rate (recognizing non-wound areas as the AOI). Mask application to greater than 50% of the wound area also had good success (F-1 = 0.831) under parameters set for high specificity. When applied to short video clips as proof-of-principle, the model performed well both with an emerging AOI (i.e., identifying the wound as incisions were developed) and with recapture of the AOI following obstruction. Overall, we identified image lighting quality and the presence of distractors (e.g., bloody sponges) as the primary sources of model errors on visual review. These data serve as a first demonstration of open surgical wound detection using first-person video footage, and set the stage for further work in this area.
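The sketch below illustrates the general pattern described in the abstract: a single-class ("wound" vs. background) mask R-CNN producing bounding boxes and segmentation masks, filtered at a high confidence threshold to favor specificity. It is a minimal illustration assuming a torchvision implementation; the paper does not state its framework, backbone, or exact thresholds, so the class setup and the 0.9 score cutoff here are assumptions, not the authors' configuration.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Hypothetical single-class setup: background + surgical wound (AOI).
NUM_CLASSES = 2


def build_wound_detector():
    # Start from a COCO-pretrained Mask R-CNN and swap in box/mask heads
    # sized for one foreground class. The backbone and pretraining choice
    # are assumptions for illustration only.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, NUM_CLASSES)
    return model


@torch.no_grad()
def detect_wound(model, image, score_threshold=0.9):
    # image: float tensor of shape (3, H, W) scaled to [0, 1].
    # A high score threshold mirrors the abstract's high-specificity setting;
    # the exact value used in the study is not reported here.
    model.eval()
    output = model([image])[0]
    keep = output["scores"] >= score_threshold
    boxes = output["boxes"][keep]          # (N, 4) candidate AOI bounding boxes
    masks = output["masks"][keep] >= 0.5   # (N, 1, H, W) binary wound masks
    return boxes, masks
```

After fine-tuning such a model on annotated wound frames, running `detect_wound` frame by frame over a clip would yield the per-frame boxes and masks needed for the tracking and camera-stabilization use case the abstract describes.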