Self-learning Visual Servoing for Robot Manipulation in Unstructured Environments.

2021 
Existing visual servoing methods for robot manipulation require explicit system modeling and known parameters, so they work only in structured environments. This paper presents a self-learning visual servoing framework for a robot manipulator operating in unstructured environments. A Gaussian-mapping likelihood process is used within Bayesian stochastic state estimation (SSE) for robotic coordination control, in which a Monte Carlo sequential importance sampling (MCSIS) algorithm is developed to estimate the robot's visual-motor mapping. The described Bayesian learning strategy restrains particle degeneration, which keeps the robot's performance robust. In addition, the servoing controller is derived for robotic coordination directly from visual observations. The proposed visual servoing framework is applied to a manipulator in an eye-in-hand configuration without any system parameters. Finally, simulation and experimental results consistently demonstrate that the proposed algorithm outperforms traditional visual servoing approaches.
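To make the idea concrete, the sketch below illustrates the general pattern the abstract describes: a particle-based (sequential importance sampling) estimator for the visual-motor Jacobian with a Gaussian likelihood on observed feature changes, plus an image-based control step using the estimate. This is a minimal illustration under assumed dimensions, noise levels, and function names (sis_update, control_step are hypothetical), not the paper's actual MCSIS implementation.

```python
# Hypothetical sketch: sequential importance sampling over candidate visual-motor
# Jacobians J (ds ≈ J dq), with a Gaussian likelihood on the observed feature
# change and an IBVS-style control law built from the weighted estimate.
import numpy as np

rng = np.random.default_rng(0)

N_PARTICLES = 200
N_FEAT, N_JOINT = 4, 3          # assumed feature / joint dimensions
SIGMA_OBS = 0.5                 # Gaussian likelihood std (pixels), assumed
SIGMA_DRIFT = 0.01              # random-walk drift on Jacobian entries, assumed

# Each particle is one candidate Jacobian, stored flattened.
particles = rng.normal(0.0, 1.0, size=(N_PARTICLES, N_FEAT * N_JOINT))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def sis_update(dq, ds_observed):
    """One sequential-importance-sampling step, given a joint increment dq
    and the observed image-feature change ds_observed."""
    global particles, weights
    # Prediction: random-walk drift of each Jacobian hypothesis.
    particles += rng.normal(0.0, SIGMA_DRIFT, size=particles.shape)
    # Gaussian likelihood of the observed feature change under each hypothesis.
    pred = particles.reshape(N_PARTICLES, N_FEAT, N_JOINT) @ dq
    err = np.linalg.norm(pred - ds_observed, axis=1)
    weights *= np.exp(-0.5 * (err / SIGMA_OBS) ** 2)
    weights /= weights.sum()
    # Systematic resampling when the effective sample size drops,
    # which limits particle degeneration.
    if 1.0 / np.sum(weights ** 2) < N_PARTICLES / 2:
        idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)
        particles = particles[idx]
        weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def control_step(s, s_star, gain=0.2):
    """IBVS-style joint increment from the weighted-mean Jacobian estimate."""
    J_hat = (weights[:, None] * particles).sum(0).reshape(N_FEAT, N_JOINT)
    return -gain * np.linalg.pinv(J_hat) @ (s - s_star)
```

In such a scheme no kinematic or camera calibration is needed: the Jacobian hypotheses are refined online from observed (dq, ds) pairs, and resampling on low effective sample size is what the abstract refers to as restraining particle deterioration.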