Connection among stochastic Hamilton-Jacobi-Bellman equation, path-integral, and Koopman operator on nonlinear stochastic optimal control
2020
This paper gives a new insight into nonlinear stochastic optimal control problems from the perspective of Koopman operators. The Koopman operator is a linear map from functions to functions that stems from the original system dynamics. Although the Koopman operator is infinite-dimensional, it suffices to focus on a specific type of observable in the control problem. This fact becomes easier to understand via path-integral control. Furthermore, focusing on this specific observable leads to a natural power-series expansion, from which coupled ordinary differential equations for discrete-state-space systems are derived. The connection among the stochastic Hamilton-Jacobi-Bellman equation, path-integral control, and the Koopman operator is clarified, and a demonstration of nonlinear stochastic optimal control shows that the derived equations work well.
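As a minimal illustration of the abstract's central object (not code from the paper): for a discrete-state Markov chain with transition matrix `P`, the Koopman operator acts on an observable f by (Kf)(x) = E[f(X_{t+1}) | X_t = x], which reduces to the matrix-vector product `P @ f`. The sketch below, with a hypothetical 3-state chain, verifies the linearity property the abstract emphasizes.

```python
import numpy as np

# Hypothetical 3-state Markov chain (illustrative values, not from the paper).
# The Koopman operator K acts on an observable f (a function of the state,
# stored as a vector) via (K f)(x) = E[f(X_{t+1}) | X_t = x] = sum_y P[x, y] f(y),
# i.e. K f = P @ f -- a linear map on functions, even though the underlying
# dynamics may be nonlinear.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])  # row-stochastic transition matrix

def koopman(f):
    """Apply the chain's Koopman operator to the observable f."""
    return P @ f

f = np.array([1.0, 0.0, 2.0])
g = np.array([0.5, -1.0, 3.0])

# Linearity: K(a*f + b*g) == a*K(f) + b*K(g)
lhs = koopman(2.0 * f + 3.0 * g)
rhs = 2.0 * koopman(f) + 3.0 * koopman(g)
print(np.allclose(lhs, rhs))  # prints True
```

For a continuous-state stochastic system the operator is infinite-dimensional; the finite matrix here is the special case the abstract's discrete-state ODE derivation targets.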