Alternating direction implicit method

In numerical linear algebra, the Alternating Direction Implicit (ADI) method is an iterative method used to solve Sylvester matrix equations. It is a popular method for solving the large matrix equations that arise in systems theory and control, and can be formulated to construct solutions in a memory-efficient, factored form. It is also used to numerically solve parabolic and elliptic partial differential equations, and is a classic method used for modeling heat conduction and solving the diffusion equation in two or more dimensions. It is an example of an operator splitting method.

The ADI method is a two-step iteration process that alternately updates the column and row spaces of an approximate solution to $AX - XB = C$. One ADI iteration consists of the following steps:

1. Solve for $X^{(j+1/2)}$, where
$$\left(A - \beta_{j+1} I\right) X^{(j+1/2)} = X^{(j)} \left(B - \beta_{j+1} I\right) + C.$$

2. Solve for $X^{(j+1)}$, where
$$X^{(j+1)} \left(B - \alpha_{j+1} I\right) = \left(A - \alpha_{j+1} I\right) X^{(j+1/2)} - C.$$

The numbers $(\alpha_{j+1}, \beta_{j+1})$ are called shift parameters, and convergence depends strongly on their choice. To perform $K$ iterations of ADI, an initial guess $X^{(0)}$ is required, as well as $K$ shift parameters $\{(\alpha_j, \beta_j)\}_{j=1}^{K}$.

If $A \in \mathbb{C}^{m \times m}$ and $B \in \mathbb{C}^{n \times n}$, then $AX - XB = C$ can be solved directly in $\mathcal{O}(m^3 + n^3)$ operations using the Bartels–Stewart method. It is therefore only beneficial to use ADI when matrix-vector multiplication and linear solves involving $A$ and $B$ can be applied cheaply.

The equation $AX - XB = C$ has a unique solution if and only if $\sigma(A) \cap \sigma(B) = \emptyset$, where $\sigma(M)$ is the spectrum of $M$. However, the ADI method performs especially well when $\sigma(A)$ and $\sigma(B)$ are well separated and $A$ and $B$ are normal matrices. These assumptions are met, for example, by the Lyapunov equation $AX + XA^{*} = C$ when $A$ is positive definite. Under these assumptions, near-optimal shift parameters are known for several choices of $A$ and $B$, and a priori error bounds can be computed, eliminating the need to monitor the residual error during the iteration.

The ADI method can still be applied when the above assumptions are not met.
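The two-step iteration above can be sketched with dense NumPy solves. This is only an illustrative implementation under simplifying assumptions (dense matrices, initial guess $X^{(0)} = 0$), not the memory-efficient factored variant mentioned earlier; the function name `adi` is chosen here for illustration.

```python
import numpy as np

def adi(A, B, C, shifts):
    """Approximately solve AX - XB = C by ADI.

    shifts -- list of (alpha_j, beta_j) pairs; one ADI iteration
    is performed per pair, starting from X^(0) = 0.
    """
    m, n = A.shape[0], B.shape[0]
    Im, In = np.eye(m), np.eye(n)
    X = np.zeros((m, n))  # initial guess X^(0)
    for alpha, beta in shifts:
        # Step 1: (A - beta I) X_half = X (B - beta I) + C
        X_half = np.linalg.solve(A - beta * Im, X @ (B - beta * In) + C)
        # Step 2: X_new (B - alpha I) = (A - alpha I) X_half - C,
        # solved via the transposed system (B - alpha I)^T X_new^T = RHS^T.
        rhs = (A - alpha * Im) @ X_half - C
        X = np.linalg.solve((B - alpha * In).T, rhs.T).T
    return X
```

Note that the second step solves a system with the unknown on the left, which is handled by transposing; in a production code one would instead reuse factorizations of the shifted matrices across iterations.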
The use of suboptimal shift parameters may adversely affect convergence, and convergence is also affected by the non-normality of $A$ or $B$ (sometimes advantageously). Krylov subspace methods, such as the Rational Krylov Subspace Method, are typically observed to converge more rapidly than ADI in this setting, and this has led to the development of hybrid ADI-projection methods.

The problem of finding good shift parameters is nontrivial, and can be understood by examining the ADI error equation. After $K$ iterations, the error is given by
$$X - X^{(K)} = \prod_{j=1}^{K} (A - \alpha_j I)(A - \beta_j I)^{-1} \left(X - X^{(0)}\right) \prod_{j=1}^{K} (B - \beta_j I)(B - \alpha_j I)^{-1}.$$
Choosing $X^{(0)} = 0$ results in the following bound on the relative error:
$$\frac{\left\|X - X^{(K)}\right\|_2}{\|X\|_2} \le \left\|r_K(A)\right\|_2 \left\|r_K(B)^{-1}\right\|_2, \qquad r_K(z) = \prod_{j=1}^{K} \frac{z - \alpha_j}{z - \beta_j}.$$
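The error equation above can be checked numerically. The sketch below assumes normal (in fact diagonal) $A$ and $B$, so that the spectral norms in the bound reduce to maxima over the eigenvalues, and deliberately reuses two crude shift pairs; all parameter values are chosen here purely for illustration.

```python
import numpy as np

# Numerical check of the ADI error bound for diagonal A and B.
# With X^(0) = 0 the relative error satisfies
#   ||X - X^(K)||_2 / ||X||_2  <=  ||r_K(A)||_2 * ||r_K(B)^{-1}||_2,
# where r_K(z) = prod_j (z - alpha_j)/(z - beta_j).

rng = np.random.default_rng(1)
a = np.linspace(1.0, 2.0, 6)        # spectrum of A ...
b = np.linspace(-2.0, -1.0, 6)      # ... well separated from the spectrum of B
A, B = np.diag(a), np.diag(b)
X_true = rng.standard_normal((6, 6))
C = A @ X_true - X_true @ B

# Deliberately crude shifts: two pairs, reused cyclically for K = 4 steps.
pairs = [(1.25, -1.25), (1.75, -1.75)]
K = 4
shifts = [pairs[j % 2] for j in range(K)]

# Run K ADI iterations from X^(0) = 0.
X, I = np.zeros((6, 6)), np.eye(6)
for alpha, beta in shifts:
    X_half = np.linalg.solve(A - beta * I, X @ (B - beta * I) + C)
    X = np.linalg.solve((B - alpha * I).T, ((A - alpha * I) @ X_half - C).T).T

def r(z):
    """The rational function r_K(z) = prod_j (z - alpha_j)/(z - beta_j)."""
    out = 1.0
    for alpha, beta in shifts:
        out *= (z - alpha) / (z - beta)
    return out

# For diagonal matrices, ||r_K(A)||_2 = max |r_K| over the eigenvalues of A,
# and ||r_K(B)^{-1}||_2 = max |1/r_K| over the eigenvalues of B.
bound = max(abs(r(x)) for x in a) * max(1.0 / abs(r(x)) for x in b)
rel_err = np.linalg.norm(X - X_true, 2) / np.linalg.norm(X_true, 2)
```

Making $|r_K|$ small on $\sigma(A)$ while $|1/r_K|$ stays small on $\sigma(B)$ is exactly the rational approximation problem that good shift parameters solve.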
