Compressed sensing

A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements. In general, this task is impossible because there is no way to reconstruct a signal during the times that the signal is not measured. Nevertheless, with prior knowledge or assumptions about the signal, it turns out to be possible to perfectly reconstruct a signal from a series of measurements (acquiring this series of measurements is called sampling). Over time, engineers have improved their understanding of which assumptions are practical and how they can be generalized.

An early breakthrough in signal processing was the Nyquist–Shannon sampling theorem. It states that if a real signal's highest frequency is less than half of the sampling rate (or less than the sampling rate, if the signal is complex), then the signal can be reconstructed perfectly by means of sinc interpolation. The main idea is that with prior knowledge about constraints on the signal's frequencies, fewer samples are needed to reconstruct the signal. Around 2004, Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho proved that given knowledge about a signal's sparsity, the signal may be reconstructed with even fewer samples than the sampling theorem requires. This idea is the basis of compressed sensing.

Compressed sensing relies on $L^1$ techniques, which several other scientific fields have used historically. In statistics, the least squares method was complemented by the $L^1$-norm, which was introduced by Laplace. Following the introduction of linear programming and Dantzig's simplex algorithm, the $L^1$-norm was used in computational statistics. In statistical theory, the $L^1$-norm was used by George W. Brown and later writers on median-unbiased estimators. It was used by Peter J. Huber and others working on robust statistics. The $L^1$-norm was also used in signal processing, for example in the 1970s, when seismologists constructed images of reflective layers within the earth from data that did not seem to satisfy the Nyquist–Shannon criterion. It was used in matching pursuit in 1993, the LASSO estimator by Robert Tibshirani in 1996, and basis pursuit in 1998. There were theoretical results describing when these algorithms recovered sparse solutions, but the required type and number of measurements were sub-optimal and were subsequently greatly improved by compressed sensing.

At first glance, compressed sensing might seem to violate the sampling theorem, because compressed sensing depends on the sparsity of the signal in question and not on its highest frequency. This is a misconception, because the sampling theorem guarantees perfect reconstruction given sufficient, not necessary, conditions. A sampling method fundamentally different from classical fixed-rate sampling cannot "violate" the sampling theorem. Sparse signals with high-frequency components can be highly under-sampled using compressed sensing compared to classical fixed-rate sampling.

An underdetermined system of linear equations has more unknowns than equations and generally has an infinite number of solutions. An example is the system $\mathbf{y} = D\mathbf{x}$, where we want to find a solution for $\mathbf{x}$; compressed sensing singles out, among all solutions consistent with the measurements, one that is sparse.
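
A minimal sketch of this idea, assuming NumPy and SciPy are available: recover a sparse $\mathbf{x}$ from underdetermined measurements $\mathbf{y} = D\mathbf{x}$ by minimizing the $L^1$-norm of $\mathbf{x}$ subject to the measurement constraint (basis pursuit), posed here as a linear program. The matrix sizes, sparsity level, and solver choice are illustrative assumptions, not values from the article.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Illustrative sizes: signal length n, number of measurements m < n, sparsity k.
n, m, k = 128, 48, 5

D = rng.standard_normal((m, n))          # random measurement matrix (m < n)

x_true = np.zeros(n)                     # k-sparse ground-truth signal
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

y = D @ x_true                           # the underdetermined measurements

# Basis pursuit: minimize ||x||_1 subject to D x = y.
# Split x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v),
# giving the linear program: minimize 1^T u + 1^T v s.t. D u - D v = y.
c = np.ones(2 * n)
A_eq = np.hstack([D, -D])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))

With enough random measurements relative to the sparsity level, the $L^1$-minimizer typically coincides with the sparse signal, even though m is far smaller than n; minimizing the $L^2$-norm instead would generally return a dense, incorrect solution.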

[ "Algorithm", "Computer vision", "Mathematical optimization", "Artificial intelligence", "Pattern recognition", "compressive imaging", "l1 norm minimization", "Restricted isometry property", "bregman iteration", "Sparse image" ]
Parent Topic
Child Topic
    No Parent Topic