Learning from Demonstration for Real-Time User Goal Prediction and Shared Assistive Control

2021 
In shared autonomy, the user input is blended with assistive motion to accomplish a task whose goal is typically unknown to the robot. Transparency between the human and the robot is essential for effective collaboration. Prior work has provided methods for the robot to infer the user goal; however, these methods usually depend on the distance between the robot and the object, which may not be directly associated with the user's real-time control intention and can therefore degrade the user's sense of control. Here, we propose a real-time goal prediction method driven by assistive motion generated through learning from demonstration (LfD), enabling more reactive assistive behavior. The LfD-generated assistive motion is blended with the user input based on the goal predictions to accomplish the targeted tasks. The LfD policy was learned offline and then used with different users. To evaluate the proposed method, we compared it with a state-of-the-art Partially Observable Markov Decision Process (POMDP) based method using a distance cost, and with a direct control method (i.e., joystick). In a pilot study (N = 6), participants controlled a 6-DoF Kinova Mico robotic arm to carry out three tasks with each of the three control methods: (1) reaching-and-grasping, (2) pouring, and (3) object-returning. We used both objective and subjective measures in the comparative study. Results show that our method yields the shortest task completion time and the least joystick control input among the three control methods, as well as a significantly lower angular difference between the user input and the assistive motion compared to the POMDP-based method. In addition, it obtained the highest subjective scores for user preference and perceived speed, and the second-highest for control feeling and "the robot did what I wanted."
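As a rough illustration of the blending scheme described in the abstract, the sketch below scores each candidate goal by how well the user's joystick velocity aligns with the LfD policy's assistive velocity for that goal, and then blends the two commands in proportion to the prediction confidence. The function names (lfd_policy, predict_goal, blend_command), the softmax-over-alignment scoring, and the confidence-scaled blending factor are illustrative assumptions; the abstract does not specify the paper's exact formulation.

```python
import numpy as np

def lfd_policy(ee_pos, goal):
    """Hypothetical stand-in for the offline-learned LfD policy: returns an
    assistive end-effector velocity toward `goal` (unit vector here; the real
    policy would be learned from demonstrations)."""
    direction = np.asarray(goal) - np.asarray(ee_pos)
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 1e-6 else np.zeros(3)

def predict_goal(user_vel, ee_pos, goals, beta=5.0):
    """Score each candidate goal by the cosine alignment between the user's
    joystick velocity and the LfD assistive velocity for that goal, then
    normalize with a softmax (assumed formulation)."""
    user_norm = np.linalg.norm(user_vel)
    if user_norm < 1e-6:
        return np.ones(len(goals)) / len(goals)  # no input: uniform belief
    scores = np.array([np.dot(user_vel / user_norm, lfd_policy(ee_pos, g))
                       for g in goals])
    probs = np.exp(beta * scores)
    return probs / probs.sum()

def blend_command(user_vel, ee_pos, goals, alpha_max=0.8):
    """Blend the user command with the LfD assistive motion for the most
    likely goal, scaling the assistance by the prediction confidence."""
    probs = predict_goal(user_vel, ee_pos, goals)
    k = int(np.argmax(probs))
    assist = lfd_policy(ee_pos, goals[k]) * np.linalg.norm(user_vel)
    alpha = alpha_max * probs[k]  # higher confidence -> more assistance
    return (1.0 - alpha) * np.asarray(user_vel) + alpha * assist

# Example: two candidate goals; the user pushes the joystick roughly toward goal 0.
goals = [np.array([0.5, 0.2, 0.1]), np.array([-0.3, 0.4, 0.1])]
blended = blend_command(np.array([0.4, 0.1, 0.0]), np.zeros(3), goals)
print(blended)
```

In this sketch the angular (cosine) alignment between the user input and the per-goal assistive motion drives the goal prediction, rather than a robot-to-object distance cost as in the POMDP-based baseline.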