Numerical Stability of Algorithms at Extreme Scale and Low Precisions

2021 
The largest dense linear systems that are being solved today are of order $n = 10^7$. Single precision arithmetic, which has a unit roundoff $u \approx 10^{-8}$, is widely used in scientific computing, and half precision arithmetic, with $u \approx 10^{-4}$, is increasingly being exploited as it becomes more readily available in hardware. Standard rounding error bounds for numerical linear algebra algorithms are proportional to $p(n)u$, with $p$ growing at least linearly with $n$. Therefore we are at the stage where these rounding error bounds are not able to guarantee any accuracy or stability in the computed results for some extreme-scale or low-accuracy computations. We explain how rounding error bounds with much smaller constants can be obtained. Blocked algorithms, which break the data into blocks of size $b$, lead to a reduction in the error constants by a factor $b$ or more. Two architectural features also reduce the error constants: extended precision registers and fused multiply-add operations, either at the scalar level or in mixed precision block form. We also discuss a new probabilistic approach to rounding error analysis that provides error constants that are the square roots of those of the worst-case bounds. Combining these different considerations provides new understanding of the numerical stability of extreme-scale and low-precision computations in numerical linear algebra.
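The effect of blocking on error constants can be illustrated with summation, the simplest case: sequential summation of $n$ terms has a worst-case error constant of order $nu$, while summing in blocks of size $b$ and then combining the block sums reduces it to order $(n/b + b)u$. The sketch below (not code from the paper; the function names and the choice of summand are illustrative) accumulates in single precision and compares both orderings against a double precision reference:

```python
import numpy as np

def sequential_sum(x):
    """Recursive (left-to-right) summation, accumulating in float32."""
    s = np.float32(0.0)
    for v in x:
        s = np.float32(s + v)
    return s

def blocked_sum(x, b=1000):
    """Sum blocks of size b in float32, then sum the block sums.
    Worst-case error constant drops from ~n*u to ~(n/b + b)*u."""
    block_sums = [sequential_sum(x[i:i + b]) for i in range(0, len(x), b)]
    return sequential_sum(np.array(block_sums, dtype=np.float32))

n = 100_000
x = np.full(n, 0.1, dtype=np.float32)       # 0.1 is not exact in binary
ref = np.sum(x.astype(np.float64))           # double precision reference

err_seq = abs(float(sequential_sum(x)) - ref)
err_blk = abs(float(blocked_sum(x)) - ref)
print(f"sequential error: {err_seq:.3e}")
print(f"blocked error:    {err_blk:.3e}")
```

On typical hardware the blocked error is smaller by roughly the factor $b$ the abstract describes, because no single float32 accumulation chain is longer than $\max(n/b,\, b)$ terms.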