Sample entropy

Sample entropy (SampEn) is a modification of approximate entropy (ApEn), used for assessing the complexity of physiological time-series signals and for diagnosing diseased states. SampEn has two advantages over ApEn: data-length independence and a relatively trouble-free implementation. There is also a small computational difference: in ApEn, the comparison between the template vector (see below) and the rest of the vectors includes a comparison with itself. This guarantees that the probabilities $C_i'^m(r)$ are never zero, so it is always possible to take their logarithm. However, because comparing a template with itself lowers ApEn values, signals are interpreted as more regular than they actually are. These self-matches are not included in SampEn: like ApEn, SampEn is a measure of complexity, but it does not count self-similar patterns. There is also a multiscale version of SampEn, suggested by Costa and others.
For a given embedding dimension $m$, tolerance $r$, and number of data points $N$, SampEn is the negative logarithm of the probability that if two sets of simultaneous data points of length $m$ have distance $< r$, then two sets of simultaneous data points of length $m+1$ also have distance $< r$. It is denoted $SampEn(m, r, N)$ (or $SampEn(m, r, \tau, N)$ when the sampling time $\tau$ is included). Now assume we have a time-series data set of length $N$, $\{x_1, x_2, x_3, \ldots, x_N\}$, with a constant time interval $\tau$. We define a template vector of length $m$, $X_m(i) = \{x_i, x_{i+1}, x_{i+2}, \ldots, x_{i+m-1}\}$, and take the distance function $d[X_m(i), X_m(j)]$ ($i \neq j$) to be the Chebyshev distance (but it could be any distance function, including Euclidean distance). We define the sample entropy to be

$$SampEn = -\ln \frac{A}{B}$$

where $A$ is the number of template vector pairs with $d[X_{m+1}(i), X_{m+1}(j)] < r$ and $B$ is the number of template vector pairs with $d[X_m(i), X_m(j)] < r$.
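The definition above translates directly into code: build the overlapping template vectors, count pairs within the tolerance $r$ at lengths $m$ and $m+1$ (excluding self-matches, which is the key difference from ApEn), and take the negative logarithm of the ratio. The following is a minimal NumPy sketch, not an optimized implementation; the function name and the common convention of defaulting $r$ to 0.2 times the standard deviation are assumptions for illustration.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Estimate SampEn(m, r, N) for a 1-D time series x.

    Counts template-vector pairs (i != j, so no self-matches) whose
    Chebyshev distance is below r, at lengths m and m + 1, and
    returns -ln(A / B).
    """
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)  # common convention, assumed here
    N = len(x)

    # N - m overlapping template vectors of length m and of length m + 1.
    tm = np.array([x[i:i + m] for i in range(N - m)])
    tm1 = np.array([x[i:i + m + 1] for i in range(N - m)])

    def pairs_within_r(templates):
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to all later templates;
            # the pair (i, i) never enters the count.
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist < r))
        return count

    B = pairs_within_r(tm)    # matches of length m
    A = pairs_within_r(tm1)   # matches of length m + 1
    return -np.log(A / B)
```

As a sanity check, a regular signal (e.g. a sine wave) should yield a lower SampEn than white noise of the same length, since far more of its length-$m$ matches persist at length $m+1$.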
