
Least mean squares filter

Least mean squares (LMS) algorithms are a class of adaptive filter used to mimic a desired filter by finding the filter coefficients that produce the least mean square of the error signal (the difference between the desired and the actual signal). It is a stochastic gradient descent method in that the filter is adapted based only on the error at the current time. It was invented in 1960 by Stanford University professor Bernard Widrow and his first Ph.D. student, Ted Hoff.

The basic idea behind the LMS filter is to approach the optimum filter weights $(R^{-1}P)$ by updating the filter weights so that they converge to that optimum. This is based on the gradient descent algorithm. The algorithm starts from small weights (zero in most cases) and, at each step, updates the weights using the gradient of the mean square error. That is, if the MSE gradient is positive, the error would keep increasing if the same weights were used for further iterations, so the weights must be reduced; likewise, if the gradient is negative, the weights must be increased. The basic weight update equation is

$W_{n+1} = W_{n} - \mu \nabla \varepsilon[n],$

where $\varepsilon[n]$ is the mean square error at step $n$ and $\mu$ is the step size.

The idea behind LMS filters is to use steepest descent to find the filter weights $\hat{\mathbf{h}}(n)$ which minimize a cost function. We start by defining the cost function as

$C(n) = E\{|e(n)|^{2}\},$

where $e(n) = d(n) - \hat{\mathbf{h}}^{H}(n)\,\mathbf{x}(n)$ is the error between the desired signal and the filter output, and $E\{\cdot\}$ denotes the expected value. For most systems the expectation $E\{\mathbf{x}(n)\,e^{*}(n)\}$ appearing in the gradient must be approximated. This can be done with the unbiased estimator

$\hat{E}\{\mathbf{x}(n)\,e^{*}(n)\} = \frac{1}{N}\sum_{i=0}^{N-1}\mathbf{x}(n-i)\,e^{*}(n-i),$

where $N$ is the number of samples used; the simplest case $N=1$ gives the estimate $\mathbf{x}(n)\,e^{*}(n)$ used by the LMS algorithm.

The LMS algorithm for a $p$-th order filter can be summarized as follows. Starting from $\hat{\mathbf{h}}(0) = \mathbf{0}$, for each $n = 0, 1, 2, \ldots$:

$\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-p+1)]^{T}$
$e(n) = d(n) - \hat{\mathbf{h}}^{H}(n)\,\mathbf{x}(n)$
$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu\, e^{*}(n)\,\mathbf{x}(n)$

As the LMS algorithm does not use the exact values of the expectations, the weights never reach the optimal weights in the absolute sense, but convergence in mean is possible: even though the weights change by small amounts at each step, they fluctuate about the optimal weights. However, if the variance with which the weights change is large, convergence in mean can be misleading. This problem may occur if the step size $\mu$ is not chosen properly.

The main drawback of the "pure" LMS algorithm is that it is sensitive to the scaling of its input $x(n)$. This makes it very hard (if not impossible) to choose a learning rate $\mu$ that guarantees stability of the algorithm (Haykin 2002). The normalised least mean squares filter (NLMS) is a variant of the LMS algorithm that solves this problem by normalising the update with the power of the input. The NLMS algorithm can be summarised as, for each $n$:

$\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-p+1)]^{T}$
$e(n) = d(n) - \hat{\mathbf{h}}^{H}(n)\,\mathbf{x}(n)$
$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \dfrac{\mu\, e^{*}(n)\,\mathbf{x}(n)}{\mathbf{x}^{H}(n)\,\mathbf{x}(n)}$
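To make the $p$-th order recursion concrete, here is a minimal sketch of the LMS update in Python/NumPy for the real-valued case (so $e^{*}(n) = e(n)$). The function name lms_filter, the toy system-identification setup, and the parameter values are illustrative assumptions, not part of the original text.

```python
import numpy as np

def lms_filter(x, d, p, mu):
    """Minimal LMS sketch: adapt p filter taps so that h @ x(n) tracks d(n).

    x  : input signal (1-D array)
    d  : desired signal (same length as x)
    p  : filter order (number of taps)
    mu : step size
    Returns the final weight vector and the per-sample error signal.
    """
    h = np.zeros(p)                          # start from small (zero) weights
    e = np.zeros(len(x))
    for n in range(p - 1, len(x)):
        x_n = x[n - p + 1:n + 1][::-1]       # x(n) = [x(n), x(n-1), ..., x(n-p+1)]^T
        y_n = h @ x_n                        # filter output
        e[n] = d[n] - y_n                    # error e(n) = d(n) - h^T x(n)
        h = h + mu * e[n] * x_n              # LMS weight update (real-valued case)
    return h, e

# Hypothetical system-identification example: recover an unknown 4-tap FIR filter.
rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h_true, mode="full")[:len(x)]
h_est, err = lms_filter(x, d, p=4, mu=0.01)
print("estimated weights:", np.round(h_est, 3))
```

With a suitably small step size the estimated weights approach h_true; increasing mu speeds up convergence but increases the fluctuation of the weights about the optimum, which is the convergence-in-mean trade-off described above.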
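For comparison, here is a similarly hedged sketch of the NLMS variant. The only change from the hypothetical lms_filter above is that the step is divided by the instantaneous input power $\mathbf{x}^{T}(n)\mathbf{x}(n)$; the small constant eps guarding against division by zero is an implementation assumption, not part of the original text.

```python
import numpy as np

def nlms_filter(x, d, p, mu, eps=1e-8):
    """Minimal NLMS sketch: LMS with the step normalised by the input power."""
    h = np.zeros(p)
    e = np.zeros(len(x))
    for n in range(p - 1, len(x)):
        x_n = x[n - p + 1:n + 1][::-1]               # x(n) = [x(n), ..., x(n-p+1)]^T
        e[n] = d[n] - h @ x_n                        # error against the desired signal
        h = h + (mu * e[n] * x_n) / (x_n @ x_n + eps)  # normalised weight update
    return h, e
```

Because each update is scaled by the input power, the same choice of mu behaves consistently whether the input signal is scaled up or down, which addresses the sensitivity to input scaling that the plain LMS filter suffers from.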

[ "Convergence (routing)", "Adaptive filter", "Signal", "nonlinear active noise control", "least mean square adaptive algorithm", "excess mean square error", "normalized lms algorithm", "Multidelay block frequency domain adaptive filter" ]