Learning Q-Function Approximations for Hybrid Control Problems

2022 
The main challenge in controlling hybrid systems arises from having to consider an exponential number of sequences of future modes to make good long-term decisions. Model predictive control (MPC) computes a control action through a finite-horizon optimisation problem. A key ingredient in this problem is a terminal cost, which accounts for the system's evolution beyond the chosen horizon. A good terminal cost can reduce the horizon length required to obtain good control actions and is often tuned empirically by observing performance. We build on the idea of using $N$-step $Q$-functions ($\mathcal{Q}^{(N)}$) in the MPC objective to avoid having to choose a terminal cost. We present a formulation incorporating the system dynamics and constraints to approximate the optimal $\mathcal{Q}^{(N)}$-function, together with algorithms to train the approximation parameters through an exploration of the state space. We test the control policy derived from the trained approximations on two benchmark problems through simulations and observe that our algorithms are able to learn good $\mathcal{Q}^{(N)}$-approximations for hybrid systems with dimensions of practical relevance from a relatively small dataset. We compare our controller's performance against that of hybrid MPC in terms of computation time and closed-loop cost.
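As a hedged illustration of the role played by $\mathcal{Q}^{(N)}$, the following sketch shows one standard $N$-step construction; it is not reproduced from the paper, and the stage cost $\ell$, dynamics $f$, and constraint sets $\mathcal{X}$, $\mathcal{U}$ are generic placeholders. The optimal $N$-step $Q$-function can be written as the fixed point

$$\mathcal{Q}^{(N)}(x_0, \mathbf{u}) \;=\; \sum_{k=0}^{N-1} \ell(x_k, u_k) \;+\; \min_{\mathbf{u}'} \mathcal{Q}^{(N)}(x_N, \mathbf{u}'), \qquad \mathbf{u} = (u_0, \dots, u_{N-1}),$$

subject to $x_{k+1} = f(x_k, u_k)$ and $(x_k, u_k) \in \mathcal{X} \times \mathcal{U}$ for $k = 0, \dots, N-1$. Under this construction, a receding-horizon policy applies the first input of $\arg\min_{\mathbf{u}} \hat{\mathcal{Q}}^{(N)}_{\theta}(x, \mathbf{u})$, where $\hat{\mathcal{Q}}^{(N)}_{\theta}$ is the trained approximation with parameters $\theta$; the tail term $\min_{\mathbf{u}'} \mathcal{Q}^{(N)}(x_N, \mathbf{u}')$ takes over the role that a hand-tuned terminal cost would otherwise play.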