Rule-based interactive assisted reinforcement learning

2019 
Reinforcement Learning (RL) has seen increasing interest over the past few years, partially owing to breakthroughs in the digestion and application of external information. The use of external information results in improved learning speeds and enables solutions to more complex domains. This thesis, a collection of five key contributions, demonstrates that performance gains comparable to those of existing Interactive Reinforcement Learning methods can be achieved using less data, sourced during operation, and without prior verification and validation of the information's integrity. First, this thesis introduces Assisted Reinforcement Learning (ARL), a collective term referring to RL methods that utilise external information to leverage the learning process, and provides a non-exhaustive review of current ARL methods. Second, two advice delivery methods common in ARL, evaluative and informative, are compared through human trials. The comparison highlights how human engagement, accuracy of advice, agent performance, and advice utility differ between the two methods. Third, this thesis introduces simulated users as a methodology for testing and comparing ARL methods. Simulated users enable the testing and comparison of ARL systems without costly and time-consuming human trials. While not a replacement for well-designed human trials, simulated users offer a cheap and robust approach to ARL design and comparison. Fourth, the concept of persistence is introduced to Interactive Reinforcement Learning. The retention and reuse of advice maximises its utility and can lead to improved performance and reduced human demand. Finally, this thesis presents rule-based interactive RL, an iterative method for providing advice to an agent. Existing interactive RL methods rely on constant human supervision and evaluation, requiring a substantial commitment from the advice-giver. Rule-based advice can be provided proactively and generalised over the state space while remaining flexible enough to handle potentially inaccurate or irrelevant information. Ultimately, the thesis contributions are validated empirically and clearly show that rule-based advice significantly reduces human guidance requirements while improving agent performance.

Doctor of Philosophy
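To make the idea of persistent, rule-based advice more concrete, the sketch below shows one way such advice might be attached to a standard tabular Q-learning agent. This is an illustrative assumption, not the thesis implementation: the RuleAdvisedQLearner class, the advice_prob parameter, and the toy rule are names introduced here only to show how retained rules could bias action selection while ordinary value learning continues and inaccurate rules cannot fully override what the agent has learned.

```python
# Minimal sketch (assumed, not the thesis implementation): a tabular
# Q-learning agent that accepts persistent, rule-based advice.
import random
from collections import defaultdict


class Rule:
    """A piece of proactive advice: if condition(state) holds, suggest an action."""
    def __init__(self, condition, action):
        self.condition = condition  # callable: state -> bool
        self.action = action        # suggested action


class RuleAdvisedQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.95,
                 epsilon=0.1, advice_prob=0.8):
        self.q = defaultdict(float)     # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.epsilon = epsilon
        self.advice_prob = advice_prob  # trust placed in a matching rule
        self.rules = []                 # retained (persistent) advice

    def add_rule(self, rule):
        """Advice can be supplied at any point and is kept for later reuse."""
        self.rules.append(rule)

    def act(self, state):
        # Follow a matching rule with probability advice_prob; otherwise fall
        # back to ordinary epsilon-greedy selection, so unhelpful rules do not
        # completely override learned behaviour.
        for rule in self.rules:
            if rule.condition(state) and random.random() < self.advice_prob:
                return rule.action
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update; advice only influences action selection.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])


# Example of a generalised rule over a toy integer state space (assumed):
agent = RuleAdvisedQLearner(actions=["left", "right"])
agent.add_rule(Rule(condition=lambda s: s < 3, action="right"))
```

Because the rules are stored on the agent rather than re-elicited each step, the same advice is reused across states and episodes, which is the persistence property the abstract refers to; the fallback to the agent's own policy is one simple way of tolerating inaccurate or irrelevant rules.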