
Sparse approximation

Sparse approximation (also known as sparse representation) theory deals with sparse solutions for systems of linear equations. Techniques for finding these solutions and exploiting them in applications have found wide use in image processing, signal processing, machine learning, medical imaging, and more.

Consider a linear system of equations $x = D\alpha$, where $D$ is an underdetermined $m \times p$ matrix ($m < p$) and $x \in \mathbb{R}^m$, $\alpha \in \mathbb{R}^p$. The matrix $D$ (typically assumed to be full-rank) is referred to as the dictionary, and $x$ is a signal of interest. The core sparse representation problem is defined as the quest for the sparsest possible representation $\alpha$ satisfying $x = D\alpha$. Due to the underdetermined nature of $D$, this linear system admits in general infinitely many possible solutions, and among these we seek the one with the fewest non-zeros. Put formally, we solve

$$\min_{\alpha \in \mathbb{R}^p} \|\alpha\|_0 \quad \text{subject to} \quad x = D\alpha,$$

where $\|\alpha\|_0 = \#\{i : \alpha_i \neq 0,\ i = 1,\ldots,p\}$ is the $\ell_0$ pseudo-norm, which counts the number of non-zero components of $\alpha$. This problem is known to be NP-hard, with a reduction to NP-complete subset selection problems in combinatorial optimization.

Sparsity of $\alpha$ implies that only a few ($k \ll m < p$) of its components are non-zero. The underlying motivation for such a sparse decomposition is the desire to provide the simplest possible explanation of $x$ as a linear combination of as few columns of $D$ as possible, these columns also being referred to as atoms. As such, the signal $x$ can be viewed as a molecule composed of a few fundamental elements taken from $D$.

While the problem posed above is indeed NP-hard, its solution can often be found using approximation algorithms. One option is a convex relaxation of the problem, obtained by using the $\ell_1$-norm instead of $\ell_0$, where $\|\alpha\|_1$ simply sums the absolute values of the entries in $\alpha$. This is known as the basis pursuit (BP) algorithm, and it can be handled by any linear programming solver. An alternative approximation method is a greedy technique, such as matching pursuit (MP), which finds the locations of the non-zeros one at a time. Surprisingly, under mild conditions on $D$ (stated in terms of the spark, the mutual coherence, or the restricted isometry property) and on the sparsity level $k$ of the solution, the sparse representation problem can be shown to have a unique solution, and BP and MP are guaranteed to find it exactly.

Often the observed signal $x$ is noisy. By relaxing the equality constraint and imposing an $\ell_2$-norm on the data-fitting term, the sparse decomposition problem becomes

$$\min_{\alpha \in \mathbb{R}^p} \|\alpha\|_0 \quad \text{subject to} \quad \|x - D\alpha\|_2 \le \epsilon,$$

where $\epsilon$ bounds the allowed deviation from an exact representation.
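The two algorithm families mentioned above can be sketched in a few lines of NumPy/SciPy. The following is a minimal, hedged illustration rather than a reference implementation: the function names, the synthetic unit-norm Gaussian dictionary, the noiseless 3-sparse signal, and the use of scipy.optimize.linprog to solve the basis-pursuit linear program are all assumptions made for this example.

```python
# Minimal sketch (assumptions: synthetic unit-norm Gaussian dictionary, exact
# k-sparse signal, no noise). Illustrative only, not a reference implementation.
import numpy as np
from scipy.optimize import linprog


def matching_pursuit(D, x, k):
    """Greedy (orthogonal) matching pursuit: select k atoms one at a time."""
    p = D.shape[1]
    residual = x.copy()
    support = []
    alpha = np.zeros(p)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit on the selected atoms (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        alpha[:] = 0.0
        alpha[support] = coeffs
        residual = x - D @ alpha
    return alpha


def basis_pursuit(D, x):
    """l1 relaxation: min ||alpha||_1 s.t. D alpha = x, posed as a linear program.

    Split alpha = u - v with u, v >= 0, so that ||alpha||_1 = 1^T (u + v).
    """
    m, p = D.shape
    c = np.ones(2 * p)
    res = linprog(c, A_eq=np.hstack([D, -D]), b_eq=x,
                  bounds=[(0, None)] * (2 * p))
    return res.x[:p] - res.x[p:]


# Synthetic example: a random dictionary and a 3-sparse ground-truth vector.
rng = np.random.default_rng(0)
m, p, k = 30, 100, 3
D = rng.standard_normal((m, p))
D /= np.linalg.norm(D, axis=0)          # normalize the atoms
alpha_true = np.zeros(p)
alpha_true[rng.choice(p, size=k, replace=False)] = rng.standard_normal(k)
x = D @ alpha_true

alpha_mp = matching_pursuit(D, x, k)
alpha_bp = basis_pursuit(D, x)
print("true support:", np.flatnonzero(alpha_true))
print("MP support:  ", np.flatnonzero(alpha_mp))
print("BP support:  ", np.flatnonzero(np.abs(alpha_bp) > 1e-6))
```

With a well-conditioned random dictionary and $k \ll m$, both recoveries typically coincide with the true support, in line with the uniqueness and recovery guarantees discussed above. For practical work, dedicated solvers such as the OrthogonalMatchingPursuit and Lasso estimators in scikit-learn are usually preferable to a hand-rolled sketch like this one.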

[ "Algorithm", "Machine learning", "Artificial intelligence", "Pattern recognition", "sparse constraint", "sparse model", "Sparse PCA", "sparse coefficient", "dictionary learning" ]
Parent Topic
Child Topic
    No Parent Topic