Q-Learning for Feedback Nash Strategy of Finite-Horizon Nonzero-Sum Difference Games.

2021 
In this article, we study the feedback Nash strategy of a model-free nonzero-sum difference game. The main contribution is a Q-learning algorithm for the linear-quadratic game that requires no prior knowledge of the system model. Notably, the game studied here is posed over a finite horizon, which is novel relative to the learning algorithms in the literature, most of which target the infinite-horizon Nash strategy. The key is to characterize the Q-factors in terms of an arbitrary control input and the state information. A numerical example verifies the effectiveness of the proposed algorithm.
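To illustrate the Q-factor idea behind such algorithms, the following is a minimal sketch for the single-controller finite-horizon linear-quadratic case; the game version of the paper couples recursions of this kind across players. All system matrices, horizon, and sample counts here are illustrative assumptions, not taken from the paper. At each stage the quadratic Q-factor Q_k(x,u) = z'H_k z, z = [x; u], is estimated by least squares from sampled transitions, without using (A, B) in the regression itself:

```python
import numpy as np

np.random.seed(0)

# Assumed toy system (illustrative only): x_{k+1} = A x + B u
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[1.0]]); Qf = np.eye(2)
N = 5                      # finite horizon
n, m = 2, 1

def feats(z):
    # monomials z_i z_j, i <= j, so that y = theta . feats(z) fits z' H z
    return np.outer(z, z)[np.triu_indices(n + m)]

P = Qf.copy()              # value matrix V_{k+1}(x) = x' P x, init at terminal cost
gains = [None] * N
for k in reversed(range(N)):
    # sample random (state, input) pairs; the simulator supplies next states,
    # but the regression never uses (A, B) directly -- this is the model-free step
    Z, y = [], []
    for _ in range(200):
        x = np.random.randn(n); u = np.random.randn(m)
        xn = A @ x + B @ u
        Z.append(feats(np.concatenate([x, u])))
        y.append(x @ Q @ x + u @ R @ u + xn @ P @ xn)
    theta, *_ = np.linalg.lstsq(np.array(Z), np.array(y), rcond=None)
    # rebuild the symmetric Q-factor matrix H_k from the fitted parameters
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = theta
    H = (H + H.T) / 2
    Hxx, Hxu, Huu = H[:n, :n], H[:n, n:], H[n:, n:]
    K = np.linalg.solve(Huu, Hxu.T)     # feedback law u_k = -K x_k
    gains[k] = K
    P = Hxx - Hxu @ K                   # value matrix for stage k

# sanity check against the model-based Riccati recursion
Pm = Qf.copy()
for k in reversed(range(N)):
    Km = np.linalg.solve(R + B.T @ Pm @ B, B.T @ Pm @ A)
    Pm = Q + A.T @ Pm @ A - A.T @ Pm @ B @ Km
print(np.allclose(gains[0], Km, atol=1e-6))  # learned gain matches Riccati gain
```

Since the stage cost and dynamics are exactly quadratic and linear, the least-squares fit recovers H_k exactly (up to numerical precision), and the learned gains coincide with the Riccati solution; in the nonzero-sum game setting each player would estimate its own Q-factor with the opponent's input entering as part of the data.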