ClipUp: A Simple and Powerful Optimizer for Distribution-Based Policy Evolution

2020 
Distribution-based search algorithms are a powerful approach for evolutionary reinforcement learning of neural network controllers. In these algorithms, gradients of the reward function with respect to the policy parameters are estimated using a population of solutions drawn from a search distribution, and are then used for policy optimization with stochastic gradient ascent. A common choice is the Adam optimization algorithm, which provides adaptive behavior during gradient ascent and has been successful in a variety of supervised learning settings. As an alternative to Adam, we propose to enhance classical momentum-based gradient ascent with two simple yet effective techniques: gradient normalization and update clipping. We argue that the resulting optimizer, called ClipUp (short for "clipped updates"), is a better choice for distribution-based policy evolution: its working principles are simple and easy to understand, and its hyperparameters can be tuned more intuitively in practice. Moreover, it avoids the need to re-tune hyperparameters when the reward scale changes. Experiments show that ClipUp is competitive with Adam despite its simplicity, and is effective on some of the most challenging continuous control benchmarks, including the Humanoid control task based on the Bullet physics simulator.
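The update rule described in the abstract is straightforward to sketch. The Python snippet below is a minimal illustration based only on the description above: the function name clipup_step, its argument layout, and the default hyperparameter values are assumptions for exposition, not the authors' reference implementation. It shows one parameter update: the estimated gradient is first normalized so that only its direction is used, folded into a classical momentum buffer, and the resulting velocity is clipped to a maximum norm before being applied with gradient ascent.

    import numpy as np

    def clipup_step(theta, grad, velocity,
                    step_size=0.1, momentum=0.9, max_speed=0.2):
        # Gradient normalization: discard the gradient's magnitude and
        # take a step of fixed length step_size in its direction.
        grad_norm = np.linalg.norm(grad)
        if grad_norm > 0.0:
            step = step_size * grad / grad_norm
        else:
            step = np.zeros_like(grad)

        # Classical momentum on the normalized step.
        velocity = momentum * velocity + step

        # Update clipping: cap the norm of the velocity at max_speed,
        # bounding how far the parameters can move per iteration.
        speed = np.linalg.norm(velocity)
        if speed > max_speed:
            velocity = velocity * (max_speed / speed)

        # Gradient ascent on the reward: add the clipped velocity.
        return theta + velocity, velocity

Under this (assumed) formulation, both hyperparameters have a direct geometric meaning: step_size is the length of each normalized step and max_speed bounds the per-iteration movement in parameter space. Because the gradient is normalized before use, rescaling the reward leaves the update unchanged, which is consistent with the abstract's claim that hyperparameters need not be re-tuned when the reward scale changes.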