Dynamic programming

Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics. In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have optimal substructure. A classic example is finding the path of minimum total length between two given nodes P and Q. If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the sub-problems. In the optimization literature this relationship is called the Bellman equation.
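As a minimal illustration of optimal substructure applied to the minimum-length path between two nodes P and Q, the following Python sketch memoizes the recursion; the graph and its edge weights are invented for this example, not taken from the text:

```python
from functools import lru_cache

# Hypothetical directed acyclic graph: node -> {successor: edge length}.
GRAPH = {
    "P": {"A": 1, "B": 4},
    "A": {"B": 2, "Q": 6},
    "B": {"Q": 3},
    "Q": {},
}

@lru_cache(maxsize=None)
def min_path_length(node, target="Q"):
    """Length of a shortest path from `node` to `target`.

    Optimal substructure: a shortest path from `node` is one edge
    plus a shortest path from that edge's endpoint, so each
    sub-problem is solved once and its result reused (memoization).
    """
    if node == target:
        return 0
    candidates = [w + min_path_length(nxt, target)
                  for nxt, w in GRAPH[node].items()]
    return min(candidates) if candidates else float("inf")

print(min_path_length("P"))  # P -> A -> B -> Q: 1 + 2 + 3 = 6
```

The memoized value of each intermediate node (here A and B) is exactly the "value of the sub-problem" that the Bellman equation relates to the value of the larger problem.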
In terms of mathematical optimization, dynamic programming usually refers to simplifying a decision by breaking it down into a sequence of decision steps over time. This is done by defining a sequence of value functions V1, V2, ..., Vn, each taking y as an argument representing the state of the system at times i from 1 to n. The definition of Vn(y) is the value obtained in state y at the last time n. The values Vi at earlier times i = n − 1, n − 2, ..., 2, 1 can be found by working backwards, using a recursive relationship called the Bellman equation. For i = 2, ..., n, Vi−1 at any state y is calculated from Vi by maximizing a simple function (usually the sum) of the gain from a decision at time i − 1 and the function Vi at the new state of the system if this decision is made. Since Vi has already been calculated for the needed states, the above operation yields Vi−1 for those states. Finally, V1 at the initial state of the system is the value of the optimal solution. The optimal values of the decision variables can be recovered, one by one, by tracking back the calculations already performed.

In control theory, a typical problem is to find an admissible control u*(t) which causes the system ẋ(t) = g(x(t), u(t), t) to follow an admissible trajectory x*(t) on a continuous time interval t₀ ≤ t ≤ t₁ that minimizes a cost function J = b(x(t₁), t₁) + ∫ from t₀ to t₁ of f(x(t), u(t), t) dt. The solution to this problem is an optimal control law or policy u* = h(x(t), t), which produces an optimal trajectory x*(t) and an optimized loss function J*.
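The backward value-function recursion described above can be sketched concretely. All specifics here (the states, decisions, horizon, gain function, and transition rule) are illustrative assumptions, not from the text:

```python
STATES = [0, 1, 2]     # possible system states y
DECISIONS = [0, 1]     # possible decisions at each time step
N = 3                  # horizon: times i = 1 .. N

def gain(y, d, i):
    """Made-up one-step gain from decision d in state y at time i."""
    return d * y - 0.5 * d

def step(y, d):
    """Made-up deterministic transition to the new state."""
    return min(y + d, max(STATES))

# V[N][y]: value obtained in state y at the last time (terminal value 0 here).
V = {N: {y: 0.0 for y in STATES}}
policy = {}

# Bellman equation, working backwards:
#   V_{i-1}(y) = max over d of [ gain(y, d, i-1) + V_i(step(y, d)) ]
for i in range(N, 1, -1):
    V[i - 1] = {}
    for y in STATES:
        best_d = max(DECISIONS,
                     key=lambda d: gain(y, d, i - 1) + V[i][step(y, d)])
        V[i - 1][y] = gain(y, best_d, i - 1) + V[i][step(y, best_d)]
        policy[(i - 1, y)] = best_d

print(V[1])  # V1: value of the optimal solution from each initial state
```

Tracking back through `policy` from the initial state then recovers the optimal decisions one by one, exactly as the paragraph describes.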
The latter obeys the fundamental equation of dynamic programming: a partial differential equation known as the Hamilton–Jacobi–Bellman equation, in which J*_x = ∂J*/∂x = [∂J*/∂x₁, ∂J*/∂x₂, ..., ∂J*/∂xₙ]ᵀ and J*_t = ∂J*/∂t. One finds the minimizing u in terms of t, x, and the unknown function J*_x and then substitutes the result into the Hamilton–Jacobi–Bellman equation to get the partial differential equation to be solved with boundary condition J(t₁) = b(x(t₁), t₁). In practice, this generally requires numerical techniques for some discrete approximation to the exact optimization relationship.
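One common discrete approximation is to discretize both time and state and run the backward recursion numerically on a grid. The following sketch assumes an invented problem (dynamics ẋ = u, running cost x² + u², terminal cost b = 0, Euler time stepping); none of these choices come from the text:

```python
import numpy as np

dt = 0.1
times = np.arange(0.0, 1.0 + dt, dt)   # discretized interval t0 <= t <= t1
xs = np.linspace(-2.0, 2.0, 81)        # state grid
us = np.linspace(-2.0, 2.0, 41)        # control grid

# Terminal boundary condition J(t1) = b(x(t1), t1); here b = 0.
J = np.zeros_like(xs)

# Step backwards in time, minimizing over the control at each grid state.
for _ in times[:-1]:
    J_new = np.empty_like(J)
    for k, x in enumerate(xs):
        # Euler step of x_dot = u, plus running cost (x^2 + u^2) * dt;
        # cost-to-go at the next state is interpolated from the grid.
        x_next = np.clip(x + us * dt, xs[0], xs[-1])
        cost = (x**2 + us**2) * dt + np.interp(x_next, xs, J)
        J_new[k] = cost.min()
    J = J_new

# J now approximates the optimal cost-to-go J* from each initial state.
```

This is only a coarse grid approximation of the exact continuous-time relationship; accuracy depends on the time step and grid resolution, which is why such discretizations are treated with numerical care in practice.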
