A Machine Vision-Based Gestural Interface for People With Upper Extremity Physical Impairments

2014 
A machine vision-based gestural interface was developed to provide individuals with upper extremity physical impairments an alternative way to perform laboratory tasks that require physical manipulation of components. A color- and depth-based 3-D particle filter framework was constructed with unique descriptive features for representing the face and hands. This framework was integrated into an interaction model that uses spatial and motion information to deal efficiently with occlusions and their negative effects. More specifically, the proposed method solves the false-merging and false-labeling problems characteristic of tracking through occlusion. The same feature encoding technique was subsequently used to detect, track, and recognize the user's hands. Experimental results demonstrated that the proposed approach was superior to other state-of-the-art tracking algorithms when interaction was present (97.52% accuracy). For gesture encoding, dynamic motion models were created using the dynamic time warping method. The gestures were classified with a conditional density propagation-based trajectory recognition method, and the hand trajectories were assigned to different classes (commands) with a recognition accuracy of 95.9%. In addition, the new approach was validated under the “one-shot learning” paradigm, with results comparable to those reported in 2012. In a validation experiment, the gestures were used to control a mobile service robot and a robotic arm in a laboratory chemistry experiment. Effective control policies were selected to achieve optimal performance for the presented gestural control system by comparing task completion times between different control modes.
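As a point of reference for the gesture-encoding step, the sketch below illustrates the classic dynamic time warping distance named in the abstract, applied to 3-D hand trajectories. It is a minimal illustration only, not the authors' implementation; the template names, array shapes, and nearest-template classification rule are assumptions for demonstration.

```python
# Minimal sketch (assumed details, not the paper's code): dynamic time warping
# distance between two hand trajectories, used here to compare an observed
# gesture against stored motion-model templates.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Return the DTW alignment cost between two trajectories.

    seq_a, seq_b: arrays of shape (length, dims), e.g. (x, y, z) hand positions.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            # Extend the cheapest of the three admissible warping paths.
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Hypothetical usage: label a gesture by its nearest template (command names invented).
templates = {"move_left": np.random.rand(40, 3), "grasp": np.random.rand(55, 3)}
observed = np.random.rand(48, 3)
label = min(templates, key=lambda k: dtw_distance(observed, templates[k]))
print(label)
```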