Safe Learning-based Model Predictive Control under State- and Input-dependent Uncertainty using Scenario Trees

2020 
The complex and uncertain dynamics of emerging systems pose several unique challenges that must be overcome in order to design high-performance controllers. A key challenge is that safety is often achieved at the expense of closed-loop performance. This is particularly important when the uncertainty description is provided in the form of a bounded set that is estimated offline from limited data. Replacing this bounded set with a learned state- and input-dependent uncertainty enables representing the variation of uncertainty in the model throughout the state space, thus improving closed-loop performance. Gaussian process (GP) models are a good candidate for learning such a representation; however, they produce a nonlinear and nonconvex description of the uncertainty set that is difficult to incorporate into currently available robust model predictive control (MPC) frameworks. In this work, we present a learning- and scenario-based MPC (L-sMPC) strategy that systematically accounts for feedback in the prediction using a state- and input-dependent scenario tree computed from a GP uncertainty model. To ensure that the closed-loop system evolution remains safe, we also propose a projection-based safety certification scheme that ensures the control inputs keep the system within an appropriately defined invariant set. The advantages of the proposed L-sMPC method in terms of improved performance and an enlarged feasible region are illustrated on a benchmark double integrator problem.
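To make the scenario-tree idea concrete, the following is a minimal sketch (not the authors' implementation) of how a state- and input-dependent uncertainty model can generate tree branches for a double integrator. The function `gp_posterior` is a hypothetical stand-in for a trained GP posterior over the model residual, and the tree applies a shared input per stage for simplicity, whereas a true scenario MPC optimizes a distinct input per branch:

```python
# Sketch: branching a scenario tree from a state- and input-dependent
# uncertainty model w(x, u), on the double integrator x+ = A x + B u + w.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator, dt = 0.1
B = np.array([[0.005], [0.1]])

def gp_posterior(x, u):
    """Stand-in for a trained GP posterior on the residual w(x, u).
    The predictive std grows with |velocity| to mimic state-dependent
    uncertainty; a real GP would return its learned mean and variance."""
    mean = np.zeros(2)
    std = (0.01 + 0.05 * abs(x[1])) * np.ones(2)
    return mean, std

def branch(x, u, kappa=2.0):
    """One tree layer: propagate the nominal model and split on -/0/+ kappa*sigma."""
    mean, std = gp_posterior(x, u)
    nominal = A @ x + B @ np.atleast_1d(u) + mean
    return [nominal + s * kappa * std for s in (-1.0, 0.0, 1.0)]

def build_tree(x0, inputs):
    """Expand a scenario tree stage by stage over a sequence of inputs."""
    layers = [[x0]]
    for u in inputs:
        layers.append([child for x in layers[-1] for child in branch(x, u)])
    return layers

tree = build_tree(np.array([0.0, 1.0]), inputs=[0.5, 0.5])
print([len(layer) for layer in tree])  # nodes per stage: [1, 3, 9]
```

Because the branch locations depend on the current state (and, through a real GP, on the input), the tree is narrow where the model is confident and wide where it is not, which is what allows the learned uncertainty to be less conservative than a fixed offline bound.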