Learned Dynamics Models and Online Planning for Model-Based Animation Agents

2021 
Deep reinforcement learning (RL) has produced impressive results when applied to creating virtual character animation control agents capable of responsive behaviour. However, current state-of-the-art methods are heavily dependent on physics-driven feedback to learn character behaviours and do not transfer to behaviours such as social interactions and gestures. In this paper, we present a novel approach to data-driven character animation: we introduce model-based RL animation control agents that learn character dynamics models applicable to a range of behaviours. Animation tasks are expressed as meta-objectives, and online planning is used to generate animation within a beta-distribution-parameterised action space that substantially improves agent efficiency. Purely through self-exploration and learned dynamics, agents created within our framework output animations that robustly complete gaze and pointing tasks while maintaining smoothness of motion, using minimal training epochs.
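The abstract describes planning online through a learned dynamics model, with actions drawn from a beta-parameterised space. A minimal sketch of that idea is a random-shooting planner: sample candidate action sequences from a Beta distribution, roll each through the dynamics model, score with the task's meta-objective, and execute the first action of the best sequence. The `dynamics_model` and `meta_objective` below are hypothetical stand-ins, not the paper's learned model or objectives.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics_model(state, action):
    # Stand-in for a learned dynamics model: maps (state, action) -> next state.
    # Actions lie in [0, 1] (Beta support); 0.5 is treated as "no movement".
    return state + 0.1 * (action - 0.5)

def meta_objective(state, target):
    # Task as a meta-objective: negative distance to a target pose/gaze point.
    return -np.linalg.norm(state - target)

def plan(state, target, horizon=5, n_samples=256, alpha=2.0, beta=2.0):
    """Random-shooting planner over a Beta-parameterised action space.

    Samples n_samples action sequences, evaluates each by rolling it
    through the dynamics model, and returns the first action of the
    highest-scoring sequence (MPC-style replanning every step)."""
    best_return, best_first_action = -np.inf, None
    for _ in range(n_samples):
        s = state.copy()
        actions = rng.beta(alpha, beta, size=(horizon, state.shape[0]))
        total = 0.0
        for a in actions:
            s = dynamics_model(s, a)
            total += meta_objective(s, target)
        if total > best_return:
            best_return, best_first_action = total, actions[0]
    return best_first_action

state = np.zeros(3)          # hypothetical 3-DoF character state
target = np.full(3, 0.2)     # hypothetical target pose
action = plan(state, target)
print(action.shape)
```

In practice, the planner would be rerun at every control step (model-predictive control), and the Beta parameters could themselves be learned; this sketch only illustrates the sample-rollout-score loop the abstract alludes to.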