RLCFR: Minimize Counterfactual Regret by Deep Reinforcement Learning

2022 
Abstract

Counterfactual regret minimization (CFR) is a popular method for decision-making problems in two-player zero-sum games with imperfect information. Unlike previous studies, which mostly explored solving large-scale problems or accelerating solution efficiency, we propose a framework, RLCFR, that aims to improve the generalization ability of the CFR method. In RLCFR, the game strategy is solved by CFR-based methods within a reinforcement learning (RL) framework. The dynamic procedure of iterative interactive strategy updating is modeled as a Markov decision process (MDP). Our method then learns a policy to select the appropriate regret-updating method at each step of the iteration process. In addition, a stepwise reward function, proportional to how well the iteration strategy performs at each step, is formulated to learn the action policy. Extensive experimental results on various games show that the generalization ability of our method is significantly improved compared with existing state-of-the-art methods.
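The abstract's core idea, selecting among regret-updating rules via an RL policy over the iteration process, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the action names, decay rates, and bandit-style value update are all assumptions made for the example, and the "exploitability" dynamics are simulated rather than computed from a real game.

```python
import random

# Toy sketch of the RLCFR idea: treat iterative CFR solving as an MDP
# whose action each step is "which regret-update rule to apply".
# The rule names and their decay rates below are hypothetical.
ACTIONS = ["cfr", "cfr_plus", "linear_cfr"]

def apply_update(exploitability, action):
    """Simulated dynamics: each update rule shrinks the strategy's
    exploitability at a different (made-up) geometric rate."""
    rates = {"cfr": 0.90, "cfr_plus": 0.80, "linear_cfr": 0.85}
    return exploitability * rates[action]

def run_episode(q, epsilon=0.1, steps=50, alpha=0.5):
    """One solving episode: pick an update rule per step (epsilon-greedy);
    the stepwise reward is the drop in exploitability, mirroring the
    paper's reward being proportional to per-step strategy improvement."""
    expl = 1.0
    for _ in range(steps):
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[x])
        new_expl = apply_update(expl, a)
        reward = expl - new_expl              # stepwise improvement
        q[a] += alpha * (reward - q[a])       # simple value-tracking update
        expl = new_expl
    return expl

random.seed(0)
q = {a: 0.0 for a in ACTIONS}
for _ in range(30):
    final = run_episode(q)
```

In the actual method the state would encode features of the current iterate and the policy would be a learned function (e.g. a neural network) rather than this tabular bandit, but the loop structure, action space over update rules, and improvement-based stepwise reward capture the framework described above.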