Teaching robots to do object assembly using multi-modal 3D vision

2017 
This paper develops an intelligent robotic assembly system that uses multi-modal 3D vision for next-generation industrial assembly. The system works in two phases: in the first, a human demonstrates the assembly task to the robot; in the second, the robot detects the objects, plans grasps, and assembles the objects by AI search, following the human demonstration. A notorious difficulty in implementing such a system is the poor precision of 3D visual detection. The paper presents multi-modal approaches to overcome that difficulty: AR markers are used in the teaching phase to detect the human operation, and point clouds together with geometric constraints are used in the robot execution phase to cope with unexpected occlusion and noise. Several experiments examine the precision and correctness of these approaches, and their applicability is demonstrated by integrating them with graph model-based motion planning and executing the results on industrial robots in real-world scenarios.
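
The abstract does not include code; as a rough illustration of the execution-phase idea (estimating an object's pose by registering a known model point cloud against the observed scene cloud), the sketch below uses Open3D's point-to-point ICP. The function name estimate_object_pose, the synthetic data, and all parameter values are illustrative assumptions, not material from the paper.

    # Minimal sketch (not the authors' code): pose estimation from point clouds
    # via ICP registration, assuming Open3D is installed.
    import numpy as np
    import open3d as o3d

    def estimate_object_pose(model_pts, scene_pts, init_pose, max_corr_dist=0.01):
        """Return a 4x4 transform aligning the model cloud to the scene cloud."""
        model = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_pts))
        scene = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scene_pts))
        result = o3d.pipelines.registration.registration_icp(
            model, scene, max_corr_dist, init_pose,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation

    if __name__ == "__main__":
        # Synthetic check: a 10 cm "object" cloud shifted by a known offset.
        rng = np.random.default_rng(0)
        model = rng.uniform(0.0, 0.1, size=(500, 3))
        true_shift = np.array([0.02, -0.01, 0.005])   # 2 cm / 1 cm / 5 mm
        scene = model + true_shift
        pose = estimate_object_pose(model, scene, np.eye(4), max_corr_dist=0.05)
        print("recovered translation:", pose[:3, 3])  # should be close to true_shift

In the paper's setting, the model cloud would come from the object's CAD or scanned geometry and the scene cloud from the depth sensor, with geometric constraints used to prune implausible poses; the sketch above only shows the registration step.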