Deep Learning-based Multimodal Control Interface for Human-Robot Collaboration

2018 
Abstract In human-robot collaborative manufacturing, an industrial robot is required to dynamically change its pre-programmed tasks and collaborate with human operators at the same workstation. However, traditional industrial robots are controlled by pre-programmed code, which cannot support the emerging needs of human-robot collaboration. In response to this need, this research explored a deep learning-based multimodal robot control interface for human-robot collaboration. Three methods were integrated into the multimodal interface: voice recognition, hand motion recognition, and body posture recognition. Deep learning was adopted as the algorithm for classification and recognition, and human-robot collaboration-specific datasets were collected to support it. The results presented at the end of the paper show the potential of adopting deep learning in human-robot collaboration systems.
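The abstract describes a multimodal interface in which separate recognizers for voice, hand motion, and body posture feed a shared command decision. The sketch below is not the authors' implementation; it is a minimal illustration of one way such a pipeline could be wired, using late fusion of three small classifiers. The command set, feature dimensions, network sizes, and fusion rule are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation): three
# per-modality deep learning classifiers whose outputs are fused into a
# single robot command. Label set and feature sizes are assumptions.
import torch
import torch.nn as nn

COMMANDS = ["stop", "pick", "place", "handover"]  # hypothetical command labels


class ModalityClassifier(nn.Module):
    """Small MLP mapping a fixed-length feature vector to command logits."""

    def __init__(self, in_dim: int, n_classes: int = len(COMMANDS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# One classifier per modality; input dimensions are placeholders for
# whatever features the real sensor pipelines would produce.
voice_clf = ModalityClassifier(in_dim=13)    # e.g. audio features
hand_clf = ModalityClassifier(in_dim=21)     # e.g. hand keypoints
posture_clf = ModalityClassifier(in_dim=17)  # e.g. body joint angles


def fuse_and_decide(voice_x, hand_x, posture_x) -> str:
    """Late fusion: average per-modality class probabilities and return
    the highest-scoring command."""
    probs = (
        torch.softmax(voice_clf(voice_x), dim=-1)
        + torch.softmax(hand_clf(hand_x), dim=-1)
        + torch.softmax(posture_clf(posture_x), dim=-1)
    ) / 3.0
    return COMMANDS[int(probs.argmax())]


if __name__ == "__main__":
    # Random feature vectors stand in for real recognition front-ends.
    cmd = fuse_and_decide(torch.randn(13), torch.randn(21), torch.randn(17))
    print("Fused robot command:", cmd)
```

In practice each classifier would be trained on the corresponding modality's dataset, and the fused command would be translated into the robot controller's motion primitives; the averaging rule here is only one possible fusion strategy.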