
Duality (optimization)

In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives: the primal problem or the dual problem. The solution to the dual problem provides a lower bound on the solution of the primal (minimization) problem. In general, however, the optimal values of the primal and dual problems need not be equal; their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition. Usually the term "dual problem" refers to the Lagrangian dual problem, but other dual problems are used as well, for example the Wolfe dual problem and the Fenchel dual problem. The Lagrangian dual problem is obtained by forming the Lagrangian of a minimization problem, using nonnegative Lagrange multipliers to add the constraints to the objective function, and then solving for the primal variable values that minimize the original objective function. This solution gives the primal variables as functions of the Lagrange multipliers, which are called dual variables, so that the new problem is to maximize the objective function with respect to the dual variables under the derived constraints on the dual variables (including at least the nonnegativity constraints).
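The Lagrangian construction described above can be illustrated on a small convex toy problem: minimize $x^2$ subject to $x \ge 1$. This example is not from the text; the function names and the grid-search dual maximization in the sketch below are illustrative choices, not a standard solver API.

```python
# Toy primal problem (illustrative): minimize f(x) = x^2 subject to x >= 1.
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x), with multiplier lam >= 0.
# The dual function g(lam) = inf_x L(x, lam) is attained at x = lam / 2.
def dual_function(lam):
    x_min = lam / 2.0                        # unconstrained minimizer of L(., lam)
    return x_min ** 2 + lam * (1.0 - x_min)  # equals lam - lam**2 / 4

# Dual problem: maximize g over lam >= 0 (coarse grid search, for illustration only)
lam_grid = [i / 1000.0 for i in range(0, 10001)]
d_star = max(dual_function(lam) for lam in lam_grid)

p_star = 1.0  # primal optimum: x = 1 gives f(x) = 1
print(p_star, d_star, p_star - d_star)  # gap p* - d* is 0: the problem is convex
```

The dual function here is $g(\lambda) = \lambda - \lambda^2/4$, maximized at $\lambda = 2$ with $d^* = 1 = p^*$, so the duality gap is zero, as expected for a convex problem satisfying a constraint qualification.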
In general, given two dual pairs of separated locally convex spaces $(X, X^*)$ and $(Y, Y^*)$ and a function $f : X \to \mathbb{R} \cup \{+\infty\}$, we can define the primal problem as finding $\hat{x}$ such that $f(\hat{x}) = \inf_{x \in X} f(x)$. In other words, if $\hat{x}$ exists, $f(\hat{x})$ is the minimum of the function $f$, and the infimum (greatest lower bound) of the function is attained. If there are constraint conditions, these can be built into the function $f$ by letting $\tilde{f} = f + I_{\mathrm{constraints}}$, where $I_{\mathrm{constraints}}$ is a suitable function on $X$ that has minimum 0 on the constraints, and for which one can prove that $\inf_{x \in X} \tilde{f}(x) = \inf_{x\ \mathrm{constrained}} f(x)$. The latter condition is trivially, but not always conveniently, satisfied for the characteristic function (i.e. $I_{\mathrm{constraints}}(x) = 0$ for $x$ satisfying the constraints and $I_{\mathrm{constraints}}(x) = \infty$ otherwise). Then extend $\tilde{f}$ to a perturbation function $F : X \times Y \to \mathbb{R} \cup \{+\infty\}$ such that $F(x, 0) = \tilde{f}(x)$. The duality gap is the difference of the right- and left-hand sides of the inequality $\sup_{y^* \in Y^*} -F^*(0, y^*) \le \inf_{x \in X} F(x, 0)$, where $F^*$ is the convex conjugate in both variables and $\sup$ denotes the supremum (least upper bound).
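The characteristic-function construction above can be checked numerically on a small example. In the sketch below, the choice $f(x) = x^2$ with constraint $x \ge 1$ and all function names are illustrative assumptions, not from the text:

```python
import math

# Illustrative example: f(x) = x^2 with the constraint x >= 1,
# encoded via the characteristic (indicator) function I_constraints.
def f(x):
    return x * x

def indicator(x):
    return 0.0 if x >= 1.0 else math.inf  # 0 on the feasible set, +inf off it

def f_tilde(x):
    return f(x) + indicator(x)

xs = [i / 100.0 for i in range(-500, 501)]             # grid on [-5, 5]
inf_f_tilde = min(f_tilde(x) for x in xs)              # unconstrained inf of f~
inf_f_constrained = min(f(x) for x in xs if x >= 1.0)  # constrained inf of f
print(inf_f_tilde, inf_f_constrained)  # both 1.0
```

On this grid both infima coincide at $x = 1$, matching the identity $\inf_{x \in X} \tilde{f}(x) = \inf_{x\ \mathrm{constrained}} f(x)$ stated above.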
The duality gap is the difference between the values of any primal solutions and any dual solutions. If $d^*$ is the optimal dual value and $p^*$ is the optimal primal value, then the duality gap is equal to $p^* - d^*$. This value is always greater than or equal to 0. The duality gap is zero if and only if strong duality holds; otherwise the gap is strictly positive and only weak duality holds. In computational optimization, another "duality gap" is often reported: the difference in value between any dual solution and the value of a feasible but suboptimal iterate for the primal problem. This alternative "duality gap" quantifies the discrepancy between the value of a current feasible but suboptimal primal iterate and the value of the dual problem; under regularity conditions, the value of the dual problem equals the value of the convex relaxation of the primal problem. The convex relaxation is the problem that arises from replacing a non-convex feasible set with its closed convex hull and a non-convex objective function with its convex closure, that is, the function whose epigraph is the closed convex hull of the epigraph of the original primal objective function.

Linear programming problems are optimization problems in which the objective function and the constraints are all linear. In the primal problem, the objective function is a linear combination of n variables. There are m constraints, each of which places an upper bound on a linear combination of the n variables. The goal is to maximize the value of the objective function subject to the constraints. A solution is a vector (a list) of n values that achieves the maximum value of the objective function.
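For linear programs, strong duality holds whenever the primal is feasible and bounded, so the primal maximum equals the dual minimum. The sketch below checks this on a hypothetical two-variable LP; the data and the brute-force grid search are illustrative assumptions chosen so the optima land on grid points, not a practical solution method:

```python
# Hypothetical toy LP (data chosen for illustration):
# Primal:  maximize 3*x1 + 2*x2  subject to  x1 + x2 <= 4,  x1 <= 2,  x >= 0
# Dual:    minimize 4*y1 + 2*y2  subject to  y1 + y2 >= 3,  y1 >= 2,  y >= 0
step = 0.5  # grid spacing; LP optima lie at vertices, which this grid hits

primal_best = max(
    3 * x1 + 2 * x2
    for x1 in [i * step for i in range(0, 9)]   # x1 in [0, 4]
    for x2 in [i * step for i in range(0, 9)]   # x2 in [0, 4]
    if x1 + x2 <= 4 and x1 <= 2
)
dual_best = min(
    4 * y1 + 2 * y2
    for y1 in [i * step for i in range(0, 13)]  # y1 in [0, 6]
    for y2 in [i * step for i in range(0, 13)]  # y2 in [0, 6]
    if y1 + y2 >= 3 and y1 >= 2
)
print(primal_best, dual_best)  # both 10.0: zero duality gap (strong LP duality)
```

The primal optimum is attained at $(x_1, x_2) = (2, 2)$ and the dual optimum at $(y_1, y_2) = (2, 1)$, both with value 10, so the duality gap is zero.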
