Learning to Reach Goals via Iterated Supervised Learning.

2020 
Current reinforcement learning (RL) algorithms can be brittle and difficult to use, especially when learning goal-reaching behaviors from sparse rewards. Although supervised imitation learning provides a simple and stable alternative, it requires access to demonstrations from a human supervisor. In this paper, we study RL algorithms for learning goal-reaching policies that leverage the stability of imitation learning without the need for explicit expert demonstrations. In lieu of expert demonstrations, supervision can be derived by leveraging the property that any trajectory is a successful demonstration for reaching the final state in that same trajectory. We propose a simple algorithm in which an agent continually relabels and imitates its own experience to progressively learn goal-reaching behaviors. In each iteration, the agent collects new trajectories using the latest policy, and maximizes the likelihood of the actions along these trajectories under the goal that was actually reached, so as to improve the policy. We formally link our supervised learning objective to the true RL objective, derive performance bounds, and demonstrate improved performance over current RL algorithms on goal-reaching in several benchmark tasks.
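The sketch below illustrates the relabel-and-imitate loop described in the abstract: roll out the current goal-conditioned policy, relabel the trajectory with the state it actually reached, and maximize the likelihood of the taken actions under that relabeled goal. The toy 2-D point environment, the policy network, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of iterated goal-conditioned supervised learning.
# Assumptions: a toy 2-D point world with four discrete moves and a small
# MLP policy; these stand in for whatever environment/policy is used.
import numpy as np
import torch
import torch.nn as nn

N_ACTIONS = 4
MOVES = np.array([[0, 1], [0, -1], [-1, 0], [1, 0]], dtype=np.float32)

class GoalConditionedPolicy(nn.Module):
    """pi(a | s, g): logits over actions from the concatenated state and goal."""
    def __init__(self, state_dim=2, goal_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, N_ACTIONS),
        )

    def forward(self, state, goal):
        return self.net(torch.cat([state, goal], dim=-1))

def rollout(policy, horizon=20):
    """Roll out the current policy toward a randomly sampled goal."""
    state = np.zeros(2, dtype=np.float32)
    goal = np.random.uniform(-5, 5, size=2).astype(np.float32)
    states, actions = [], []
    for _ in range(horizon):
        logits = policy(torch.from_numpy(state), torch.from_numpy(goal))
        action = torch.distributions.Categorical(logits=logits).sample().item()
        states.append(state.copy())
        actions.append(action)
        state = state + MOVES[action]           # deterministic toy dynamics
    return states, actions, state               # final state = goal actually reached

policy = GoalConditionedPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for iteration in range(200):
    # 1. Collect a new trajectory with the latest policy.
    states, actions, reached_goal = rollout(policy)
    # 2. Relabel: treat the reached final state as the goal this trajectory demonstrates.
    goals = np.tile(reached_goal, (len(states), 1))
    # 3. Imitate: maximize the likelihood of the taken actions under the relabeled goal.
    logits = policy(torch.as_tensor(np.array(states)), torch.as_tensor(goals))
    loss = nn.functional.cross_entropy(logits, torch.as_tensor(actions))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

This sketch relabels only with the trajectory's final state; richer variants could also relabel each prefix of the trajectory with the intermediate states it reaches, but the loop structure stays the same.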