
Approximation error

The approximation error in some data is the discrepancy between an exact value and some approximation to it. An approximation error can occur because the measurement of the data is not precise, or because approximations are used instead of the real data (for example, 3.14 instead of π). In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates how the error is propagated by the algorithm.

One commonly distinguishes between the relative error and the absolute error. Given some value v and its approximation v_approx, the absolute error is

    ε = |v − v_approx|,

where the vertical bars denote the absolute value. If v ≠ 0, the relative error is

    η = |v − v_approx| / |v|,

and the percent error is

    δ = η × 100%.

In words, the absolute error is the magnitude of the difference between the exact value and the approximation. The relative error is the absolute error divided by the magnitude of the exact value, and the percent error is the relative error expressed in terms of per 100. These definitions can be extended to the case when v and v_approx are n-dimensional vectors, by replacing the absolute value with an n-norm.
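The three definitions above can be sketched directly in code. This is a minimal illustration (the function names are our own, not a standard API), using the approximation 22/7 for π as the worked example:

```python
import math

def absolute_error(v, v_approx):
    """Magnitude of the difference between the exact value and the approximation."""
    return abs(v - v_approx)

def relative_error(v, v_approx):
    """Absolute error divided by the magnitude of the exact value; undefined for v = 0."""
    if v == 0:
        raise ValueError("relative error is undefined when the exact value is 0")
    return abs(v - v_approx) / abs(v)

def percent_error(v, v_approx):
    """Relative error expressed in terms of per 100."""
    return 100 * relative_error(v, v_approx)

# Approximating pi by 22/7: small absolute error, even smaller relative error.
print(absolute_error(math.pi, 22 / 7))   # about 0.00126
print(relative_error(math.pi, 22 / 7))   # about 0.000402
print(percent_error(math.pi, 22 / 7))    # about 0.0402 %
```

For the n-dimensional extension mentioned above, one would replace `abs(v - v_approx)` with a vector norm of the difference, e.g. the Euclidean norm `math.sqrt(sum((a - b)**2 for a, b in zip(v, v_approx)))`.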
