Ellipsoid method

In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions. When specialized to solving feasible linear optimization problems with rational data, it is an algorithm that finds an optimal solution in a finite number of steps. The method generates a sequence of ellipsoids whose volume uniformly decreases at every step, each enclosing a minimizer of the convex function.

The ellipsoid method has a long history. As an iterative method, a preliminary version was introduced by Naum Z. Shor. In 1972, an approximation algorithm for real convex minimization was studied by Arkadi Nemirovski and David B. Yudin (Judin). As an algorithm for solving linear programming problems with rational data, the ellipsoid algorithm was studied by Leonid Khachiyan; Khachiyan's achievement was to prove the polynomial-time solvability of linear programs. Following Khachiyan's work, the ellipsoid method was the only algorithm for solving linear programs whose runtime had been proved to be polynomial, until the advent of Karmarkar's algorithm. However, Karmarkar's interior-point method and variants of the simplex algorithm are much faster than the ellipsoid method in practice, and Karmarkar's algorithm is also faster in the worst case. Nevertheless, the ellipsoid algorithm allows complexity theorists to achieve (worst-case) bounds that depend on the dimension of the problem and on the size of the data, but not on the number of rows, so it remained important in combinatorial optimization theory for many years. Only in the 21st century have interior-point algorithms with similar complexity properties appeared.
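The claim that the ellipsoid volume "uniformly decreases at every step" can be made quantitative. For the standard central-cut update in dimension $n$, the following well-known bound holds (stated here as a sketch; the factor is not given in the text above):

$$\frac{\operatorname{vol}\bigl(\mathcal{E}^{(k+1)}\bigr)}{\operatorname{vol}\bigl(\mathcal{E}^{(k)}\bigr)} \;\le\; e^{-\frac{1}{2(n+1)}} \;<\; 1.$$

Because this factor depends only on the dimension $n$, the volume shrinks geometrically, which is the source of the polynomial iteration bound for linear programs with rational data.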
A convex minimization problem consists of a convex function $f_0(x):\mathbb{R}^n \to \mathbb{R}$ to be minimized over the variable $x$, convex inequality constraints of the form $f_i(x) \leqslant 0$, where the functions $f_i$ are convex, and linear equality constraints of the form $h_i(x) = 0$. We are also given an initial ellipsoid $\mathcal{E}^{(0)} \subset \mathbb{R}^n$ containing a minimizer $x^*$, defined as

$$\mathcal{E}^{(0)} = \bigl\{\, z \in \mathbb{R}^n : (z - x_0)^\top P^{-1} (z - x_0) \leqslant 1 \,\bigr\},$$

where $P \succ 0$ and $x_0$ is the center of $\mathcal{E}^{(0)}$. Finally, we require the existence of a cutting-plane oracle for the function $f$. One example of a cutting plane is given by a subgradient $g$ of $f$. At the $k$-th iteration of the algorithm, we have a point $x^{(k)}$ at the center of an ellipsoid $\mathcal{E}^{(k)}$. We query the cutting-plane oracle to obtain a vector $g^{(k+1)} \in \mathbb{R}^n$ such that $g^{(k+1)\top}\bigl(x^* - x^{(k)}\bigr) \leqslant 0$.
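The setup above can be turned into a minimal sketch of the central-cut ellipsoid iteration for an unconstrained convex objective. This is an illustrative implementation, not taken from the text: it assumes the cutting-plane oracle is realized by a user-supplied subgradient function (`subgrad` is a hypothetical name), and it uses the standard central-cut update formulas for the center and the shape matrix.

```python
import numpy as np

def ellipsoid_method(f, subgrad, x0, P0, iters=200):
    """Minimize a convex f: R^n -> R by the central-cut ellipsoid method.

    x0 : center of an initial ellipsoid known to contain a minimizer x*
    P0 : symmetric positive-definite matrix defining that ellipsoid,
         E = { z : (z - x0)^T P0^{-1} (z - x0) <= 1 }

    Requires n >= 2 (the shape update divides by n^2 - 1).
    """
    n = len(x0)
    x, P = x0.astype(float), P0.astype(float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(iters):
        g = subgrad(x)                 # cutting plane through the current center
        denom = np.sqrt(g @ P @ g)
        if denom == 0.0:               # zero subgradient: the center is optimal
            break
        gt = g / denom                 # subgradient normalized by the ellipsoid metric
        # Standard central-cut update: the new ellipsoid is the minimum-volume
        # ellipsoid containing the half of the old one where x* must lie.
        x = x - (1.0 / (n + 1)) * (P @ gt)
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(P @ gt, gt @ P))
        fx = f(x)
        if fx < best_f:                # track the best center seen so far
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```

For example, minimizing $f(x) = \|x - c\|^2$ from a large initial ball around the origin drives the best center toward $c$; tracking the best iterate is needed because individual centers are not guaranteed to improve monotonically.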
