
Hierarchical matrix

In numerical mathematics, hierarchical matrices (H-matrices) are used as data-sparse approximations of non-sparse matrices. While a sparse matrix of dimension $n$ can be represented efficiently in $O(n)$ units of storage by storing only its non-zero entries, a non-sparse matrix would require $O(n^2)$ units of storage, and using this type of matrix for large problems would therefore be prohibitively expensive in terms of storage and computing time. Hierarchical matrices provide an approximation requiring only $O(nk\log(n))$ units of storage, where $k$ is a parameter controlling the accuracy of the approximation. In typical applications, e.g., when discretizing integral equations, preconditioning the resulting systems of linear equations, or solving elliptic partial differential equations, a rank proportional to $\log(1/\epsilon)^{\gamma}$ with a small constant $\gamma$ is sufficient to ensure an accuracy of $\epsilon$. Compared to many other data-sparse representations of non-sparse matrices, hierarchical matrices offer a major advantage: the results of matrix arithmetic operations like matrix multiplication, factorization or inversion can be approximated in $O(nk^{\alpha}\log(n)^{\beta})$ operations, where $\alpha, \beta \in \{1, 2, 3\}$.

Hierarchical matrices rely on local low-rank approximations: let $I, J$ be index sets, and let $G \in \mathbb{R}^{I \times J}$ denote the matrix we have to approximate. In many applications (see above), we can find subsets $t \subseteq I$, $s \subseteq J$ such that $G|_{t \times s}$ can be approximated by a rank-$k$ matrix. This approximation can be represented in factorized form $G|_{t \times s} \approx A B^{*}$ with factors $A \in \mathbb{R}^{t \times k}$, $B \in \mathbb{R}^{s \times k}$. While the standard representation of the matrix $G|_{t \times s}$ requires $O((\#t)(\#s))$ units of storage, the factorized representation requires only $O(k(\#t + \#s))$ units. If $k$ is not too large, the storage requirements are reduced significantly.

In order to approximate the entire matrix $G$, it is split into a family of submatrices. Large submatrices are stored in factorized representation, while small submatrices are stored in standard representation in order to improve the efficiency.

Low-rank matrices are closely related to degenerate expansions used in panel clustering and the fast multipole method to approximate integral operators. In this sense, hierarchical matrices can be considered the algebraic counterparts of these techniques.
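The local low-rank idea can be illustrated with a short sketch. The snippet below (an illustration, not part of the article: the kernel $1/|x-y|$, the point sets, and the rank $k=8$ are assumed for demonstration) computes near-optimal factors $A$, $B$ with $G|_{t\times s}\approx AB^{*}$ via a truncated SVD, one common way to obtain such factors:

```python
import numpy as np

def low_rank_factors(block, k):
    """Approximate a matrix block by rank-k factors A, B with block ~ A @ B.T.

    Uses a truncated SVD, which gives the best rank-k approximation
    in the spectral and Frobenius norms (Eckart-Young theorem).
    """
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    A = U[:, :k] * s[:k]   # shape (#t, k)
    B = Vt[:k, :].T        # shape (#s, k)
    return A, B

# A smooth kernel evaluated on well-separated point sets produces a block
# with rapidly decaying singular values, so a small rank suffices.
x = np.linspace(0.0, 1.0, 200)                 # points for the row index set t
y = np.linspace(2.0, 3.0, 200)                 # points for the column index set s
block = 1.0 / np.abs(x[:, None] - y[None, :])  # assumed kernel 1/|x-y|

A, B = low_rank_factors(block, k=8)
error = np.linalg.norm(block - A @ B.T) / np.linalg.norm(block)

# Storage: 200*200 = 40000 entries in standard form
# vs. 8*(200+200) = 3200 entries in factorized form.
print(f"relative error: {error:.2e}")
```

Because the two point sets are well separated, the singular values of the block decay exponentially, so the rank-8 factorization is accurate to many digits while using less than a tenth of the storage.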
Hierarchical matrices are successfully used to treat integral equations, e.g., the single and double layer potential operators appearing in the boundary element method. A typical operator has the form

$$[\mathcal{G} u](x) = \int_{\Omega} \kappa(x, y)\, u(y)\, dy.$$

The Galerkin method leads to matrix entries of the form

$$G_{ij} = \int_{\Omega} \int_{\Omega} \kappa(x, y)\, \varphi_i(x)\, \psi_j(y)\, dy\, dx,$$

where $(\varphi_i)_{i \in I}$ and $(\psi_j)_{j \in J}$ are families of finite element basis functions. If the kernel function $\kappa$ is sufficiently smooth, we can approximate it by polynomial interpolation to obtain

$$\tilde{\kappa}(x, y) = \sum_{\nu=1}^{k} \kappa(\xi_\nu, y)\, \ell_\nu(x),$$

where $(\xi_\nu)_{\nu=1}^{k}$ is the family of interpolation points and $(\ell_\nu)_{\nu=1}^{k}$ is the corresponding family of Lagrange polynomials. Replacing $\kappa$ by $\tilde{\kappa}$ yields an approximation

$$\tilde{G}_{ij} = \sum_{\nu=1}^{k} \int_{\Omega} \varphi_i(x)\, \ell_\nu(x)\, dx \int_{\Omega} \kappa(\xi_\nu, y)\, \psi_j(y)\, dy = \sum_{\nu=1}^{k} A_{i\nu} B_{j\nu},$$

i.e., the submatrix $\tilde{G} = A B^{*}$ is given in factorized form with $A_{i\nu} = \int_{\Omega} \varphi_i(x)\, \ell_\nu(x)\, dx$ and $B_{j\nu} = \int_{\Omega} \kappa(\xi_\nu, y)\, \psi_j(y)\, dy$.
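The interpolation argument can be sketched numerically. The snippet below is an illustration only: it uses an assumed kernel $\log|x-y|$ on well-separated point sets and pointwise kernel evaluation in place of the Galerkin integrals, building the factors $A$ and $B$ from Lagrange interpolation at Chebyshev points:

```python
import numpy as np

def lagrange_basis(nodes, x):
    """Evaluate all Lagrange polynomials ell_nu for the given nodes at the points x."""
    k = len(nodes)
    L = np.ones((len(x), k))
    for nu in range(k):
        for mu in range(k):
            if mu != nu:
                L[:, nu] *= (x - nodes[mu]) / (nodes[nu] - nodes[mu])
    return L  # shape (len(x), k), L[i, nu] = ell_nu(x_i)

# Assumed kernel, smooth away from the diagonal x = y.
kappa = lambda x, y: np.log(np.abs(x - y))

# Well-separated clusters: interpolation in x converges exponentially in k.
x = np.linspace(0.0, 1.0, 150)
y = np.linspace(2.0, 3.0, 150)
k = 10
# Chebyshev interpolation points mapped to [0, 1]
xi = 0.5 + 0.5 * np.cos((2 * np.arange(k) + 1) * np.pi / (2 * k))

# Factorized representation: G|_{t x s} ~ A @ B.T with
# A[i, nu] = ell_nu(x_i) and B[j, nu] = kappa(xi_nu, y_j).
A = lagrange_basis(xi, x)
B = kappa(xi[None, :], y[:, None])

G = kappa(x[:, None], y[None, :])
error = np.abs(G - A @ B.T).max()
print(f"max interpolation error: {error:.2e}")
```

Because the singularity of the kernel lies well outside the cluster $[0,1]$, the interpolation error decays exponentially in $k$, which is exactly the mechanism that makes the low-rank blocks of an H-matrix accurate.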
