On Oracle-Efficient PAC RL With Rich Observations

Authors:
Christoph Dann, Carnegie Mellon University
Nan Jiang, University of Illinois at Urbana-Champaign
Akshay Krishnamurthy, Microsoft
Alekh Agarwal, Microsoft Research
John Langford, Microsoft Research New York
Robert Schapire, Microsoft Research

Introduction:

The authors study the computational tractability of PAC reinforcement learning with rich observations.

Abstract:

We study the computational tractability of PAC reinforcement learning with rich observations. We present new provably sample-efficient algorithms for environments with deterministic hidden state dynamics and stochastic rich observations. These methods operate in an oracle model of computation -- accessing policy and value function classes exclusively through standard optimization primitives -- and therefore represent computationally efficient alternatives to prior algorithms that require enumeration. With stochastic hidden state dynamics, we prove that the only known sample-efficient algorithm, OLIVE, cannot be implemented in the oracle model. We also present several examples that illustrate fundamental challenges of tractable PAC reinforcement learning in such general settings.

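The "oracle model of computation" mentioned in the abstract refers to algorithms that touch their policy and value function classes only through standard optimization primitives, never by enumerating the class. As a rough illustration of that idea (not the paper's actual interface), the sketch below shows one such primitive, a least-squares regression oracle over a value-function class; the names, signatures, and the finite function class are assumptions made purely for exposition.

```python
# Illustrative sketch only: a hypothetical optimization-primitive ("oracle")
# interface of the kind the abstract alludes to. Nothing here is the paper's API.
from typing import Callable, List, Sequence, Tuple

# A value function maps an observation (here, a feature vector) to a scalar.
ValueFunction = Callable[[Sequence[float]], float]


def regression_oracle(
    function_class: List[ValueFunction],
    dataset: List[Tuple[Sequence[float], float]],
) -> ValueFunction:
    """Return the function in the class with minimum squared error on the data.

    The learner interacts with the class only through calls like this one,
    which is what "oracle-efficient" means informally: computational cost is
    measured in oracle calls plus other processing, with no enumeration of
    the class by the learner itself. (A finite list is used here only so the
    sketch is runnable; a real oracle would be an ERM solver for a rich class.)
    """
    def squared_error(f: ValueFunction) -> float:
        return sum((f(x) - y) ** 2 for x, y in dataset)

    return min(function_class, key=squared_error)


if __name__ == "__main__":
    # Toy function class: constant predictors.
    constants = [lambda x, c=c: c for c in (0.0, 0.5, 1.0)]
    data = [([0.1], 0.9), ([0.2], 1.1)]
    best = regression_oracle(constants, data)
    print(best([0.0]))  # -> 1.0, the constant closest to the regression targets
```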