Multicamera 3D Reconstruction of Dynamic Surgical Cavities: Autonomous Optimal Camera Viewpoint Adjustment

2020 
In robot-assisted minimally invasive surgery (RMIS), small keyhole incisions are made in the patient's insufflated abdomen. Robotic surgical tools and laparoscopic optical sensors are then inserted through these incisions via trocars, and real-time vision from human-positioned laparoscopes informs the surgeon's teleoperation of the surgical robot. This RMIS architecture affords several salient benefits, including improved surgical tool dexterity, reduced patient recovery time, and lower risk of infection [1]. However, even with experienced human experts in the loop, poor situational awareness caused by limited visual and haptic feedback can degrade performance. Recent medical robotics research seeks to improve RMIS by introducing augmentations and levels of task autonomy, and telesurgical visual perception must be addressed toward that end. Since manual camera positioning in robotic minimally invasive surgery is suboptimal and error-prone [2], the authors instead pursue autonomous solutions. Unlike prior autonomous camera positioning research focused on tool tracking [3]–[5], this work presents a novel context-aware autonomous multicamera viewpoint adjustment pipeline that simultaneously maintains the surgical tool within view and provides better point coverage for real-time 3D reconstruction of the surgical cavity.
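The abstract does not specify how the pipeline trades off tool visibility against reconstruction coverage, but the dual objective it names can be illustrated with a minimal sketch. The following Python snippet scores a candidate camera pose as a weighted sum of (a) whether the tool tip projects inside the image and (b) the fraction of cavity points visible from that pose. All names (`viewpoint_score`, `w_tool`), the pinhole-projection visibility model, and the linear weighting are illustrative assumptions, not the authors' method.

```python
import numpy as np

def project(K, T_cam_world, points_world):
    """Project 3D world points through a pinhole camera.
    K: 3x3 intrinsics; T_cam_world: 4x4 world-to-camera extrinsic.
    Returns (pixel coords, camera-frame depths)."""
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    pts_cam = (T_cam_world @ pts_h.T).T[:, :3]
    z = pts_cam[:, 2]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, z

def viewpoint_score(K, T_cam_world, tool_tip_world, cavity_points_world,
                    img_w=640, img_h=480, w_tool=0.5):
    """Hypothetical viewpoint objective: weighted sum of tool-in-view
    (binary) and the fraction of cavity points that fall in front of the
    camera and inside the image bounds."""
    uv_tool, z_tool = project(K, T_cam_world, tool_tip_world[None, :])
    in_view = (z_tool[0] > 0 and 0 <= uv_tool[0, 0] < img_w
               and 0 <= uv_tool[0, 1] < img_h)
    uv, z = project(K, T_cam_world, cavity_points_world)
    visible = ((z > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < img_w)
               & (uv[:, 1] >= 0) & (uv[:, 1] < img_h))
    coverage = visible.mean()
    return w_tool * float(in_view) + (1 - w_tool) * coverage
```

In practice an autonomous controller would evaluate such a score over a set of reachable camera poses (per camera, in the multicamera case) and move toward the maximizer; occlusion handling and motion constraints, which the paper's pipeline would need, are omitted here.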