iTP-LfD: Improved task parametrised learning from demonstration for adaptive path generation of cobot

2021 
Abstract The Task-Parameterised Learning from Demonstration (TP-LfD) approach aims to automatically adapt the movements of collaborative robots (cobots) to new settings using knowledge learnt from demonstrated paths. The approach is suitable for encoding complex relations between a cobot and its surroundings, i.e., task-relevant objects. However, further effort is still required to enhance the intelligence and adaptability of TP-LfD for dynamic tasks. To this end, this paper presents an improved TP-LfD (iTP-LfD) approach for adaptively programming cobots across a variety of industrial tasks. iTP-LfD comprises three main improvements over existing TP-LfD approaches: 1) detecting generic visual features for frames of reference (frames) in demonstrations, so that paths can be reproduced in new settings without complex computer vision algorithms; 2) minimising redundant frames that belong to the same object in demonstrations using a statistical algorithm; and 3) eliminating irrelevant frames using a reinforcement learning algorithm. The distinguishing characteristic of the iTP-LfD approach is that optimal frames are identified from demonstrations, reducing computational complexity, overcoming occlusions in new settings, and boosting overall performance. Case studies on a variety of industrial tasks involving different objects and scenarios highlight the adaptability and robustness of the iTP-LfD approach.
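To make the frame-based representation concrete, the sketch below shows the standard reproduction step that task-parameterised methods such as TP-LfD build on: Gaussians learnt in each local frame are transformed by the frame poses observed in a new setting and fused by a product of Gaussians. This is a minimal illustration under assumed inputs (the function reproduce_point and the toy frame parameters are hypothetical), not the authors' implementation of iTP-LfD.

# Minimal sketch of the task-parameterised reproduction step underlying TP-LfD.
# Assumes per-frame Gaussians (mu, Sigma) were already learnt from demonstrations
# and that each frame's pose in the new setting is given as a pair (A, b).
import numpy as np

def reproduce_point(frames, new_params):
    """Fuse per-frame Gaussians in a new setting via a product of Gaussians.

    frames:     list of (mu, Sigma) learnt in each local frame of reference
    new_params: list of (A, b) giving each frame's pose in the new setting
    returns:    (mu_hat, Sigma_hat) of the reproduction distribution
    """
    precision_sum = 0.0
    weighted_mean = 0.0
    for (mu, Sigma), (A, b) in zip(frames, new_params):
        mu_g = A @ mu + b                # local mean mapped into the global frame
        Sigma_g = A @ Sigma @ A.T        # local covariance mapped likewise
        Lambda = np.linalg.inv(Sigma_g)  # precision of the transformed Gaussian
        precision_sum = precision_sum + Lambda
        weighted_mean = weighted_mean + Lambda @ mu_g
    Sigma_hat = np.linalg.inv(precision_sum)
    mu_hat = Sigma_hat @ weighted_mean
    return mu_hat, Sigma_hat

# Toy usage: two 2-D frames, one far more confident (lower variance) than the other.
frames = [(np.array([0.0, 0.0]), np.eye(2) * 0.01),
          (np.array([0.5, 0.0]), np.eye(2) * 0.10)]
new_params = [(np.eye(2), np.array([0.2, 0.3])),
              (np.eye(2), np.array([0.8, 0.3]))]
mu_hat, Sigma_hat = reproduce_point(frames, new_params)
print(mu_hat)  # lands near the low-variance frame's transformed mean

Because the fused estimate is dominated by low-variance frames, redundant or irrelevant frames dilute or distort the reproduction, which is why pruning them, as improvements 2) and 3) above describe, matters.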