Reinforcement Learning-based Hierarchical Control for Path Following of a Salamander-like Robot

2020 
Path following is a challenging task for legged robots. In this paper, we present a hierarchical control architecture for path following of a quadruped salamander-like robot, in which the tracking problem is decomposed into two sub-tasks: high-level policy learning based on the framework of reinforcement learning (RL) and low-level traditional controller design. More specifically, the high-level policy is learned in a physics simulator with a low-level controller designed in advance. To improve tracking accuracy and eliminate steady-state errors, a Soft Actor-Critic algorithm with state integral compensation is proposed. Additionally, to enhance generalization and transferability, a compact state representation containing only the target-path information, together with an abstract action space analogous to front-back and left-right commands, is proposed. The proposed algorithm is trained offline in simulation and tested on a self-developed quadruped salamander-like robot in different path-following tasks. Simulation and experimental results validate the satisfactory performance of the proposed method.
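The abstract outlines the hierarchy but not its interfaces, so the following is a minimal sketch of one plausible control loop: a compact state built from path-tracking errors plus an integral compensation term, a placeholder standing in for the learned Soft Actor-Critic policy, and a low-level controller that maps the abstract front-back/left-right action to gait commands. All function names, state layouts, and scaling constants here are assumptions for illustration; the paper's actual network and gait controller are not reproduced.

```python
# Hypothetical sketch of the hierarchical path-following loop (assumed interfaces).
import numpy as np


def compact_state(path_error, heading_error, error_integral):
    """Compact state: only target-path information plus an integral
    compensation term (the 'state integral compensation' of the abstract)."""
    return np.array([path_error, heading_error, error_integral], dtype=np.float32)


def high_level_policy(state):
    """Stand-in for the trained Soft Actor-Critic policy.
    Returns an abstract action in [-1, 1]^2: (front-back, left-right)."""
    # Placeholder linear policy; the real policy is a learned SAC network.
    w = np.array([[0.5, 0.0, 0.0],
                  [0.0, 1.0, 0.3]])
    return np.tanh(w @ state)


def low_level_controller(abstract_action):
    """Map the abstract action to gait commands for the salamander-like robot
    (assumed interface: stride length and turning rate)."""
    front_back, left_right = abstract_action
    stride = 0.05 * (1.0 + front_back)   # metres per gait cycle (assumed scale)
    turn_rate = 0.5 * left_right         # rad/s (assumed scale)
    return stride, turn_rate


# One control step: accumulate the tracking-error integral, query the
# high-level policy, then hand the abstract action to the low-level controller.
dt = 0.02
error_integral = 0.0
path_error, heading_error = 0.1, 0.05
error_integral += path_error * dt
s = compact_state(path_error, heading_error, error_integral)
stride, turn_rate = low_level_controller(high_level_policy(s))
print(stride, turn_rate)
```

In this reading, the integral term plays the role a PID integral would play in a classical tracker: it lets the learned policy drive residual static offsets to zero while the low-level controller remains a fixed, conventionally designed module.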