Asymptotic closed-loop design for transform domain temporal prediction

2015 
Current video coders exploit temporal dependencies via prediction that consists of motion-compensated pixel copying operations. Such per-pixel temporal prediction ignores important underlying spatial correlations, as well as considerable variations in temporal correlation across frequency components. In the transform domain, however, spatial decorrelation is first achieved, allowing for the true temporal correlation at each frequency to emerge and be properly accounted for, with particular impact at high frequencies, whose lower correlation is otherwise masked by the dominant low frequencies. This paper focuses on effective design of transform domain temporal prediction that: i) fully accounts for the effects of sub-pixel interpolation filters, and ii) circumvents the challenge of catastrophic design instability due to quantization error propagation through the prediction loop. We design predictors conditioned on frequency and sub-pixel position, employing an iterative open-loop (hence stable) design procedure that, on convergence, approximates closed-loop operation. Experimental results validate the effectiveness of both the asymptotic closed-loop design procedure and the transform-domain temporal prediction paradigm, with significant and consistent performance gains over the standard.
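The asymptotic closed-loop (ACL) idea summarized above can be illustrated with a minimal toy sketch in Python. This is an assumption-laden simplification, not the paper's implementation: it uses scalar least-squares predictors per transform frequency and a uniform quantizer, and it abstracts away motion compensation, sub-pixel interpolation, and the surrounding codec. The names `acl_design`, `design_per_freq_predictors`, and `quantize`, the data layout, and the parameter values are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)


def design_per_freq_predictors(ref, cur):
    """Least-squares scalar predictor a_k per frequency k, minimizing
    sum ||cur[:, k] - a_k * ref[:, k]||^2 independently for each column."""
    num = np.sum(ref * cur, axis=0)
    den = np.sum(ref * ref, axis=0) + 1e-12
    return num / den


def quantize(x, step=4.0):
    """Uniform scalar quantizer standing in for the codec's residual quantizer."""
    return step * np.round(x / step)


def acl_design(coeff_frames, num_iters=20, step=4.0):
    """Asymptotic closed-loop (ACL) style design iteration on toy data.

    coeff_frames: list of (num_blocks, num_freqs) arrays of transform
    coefficients for consecutive frames (hypothetical layout).

    Iteration 0 designs against the original frames (pure open loop).
    Each later iteration re-fits the predictors against reconstructions
    produced in the previous iteration, so every pass is a stable
    open-loop design, yet as the reconstructions converge the design
    approximates closed-loop operation.
    """
    refs = coeff_frames[:-1]                    # references for frames 1..N-1
    preds = np.ones(coeff_frames[0].shape[1])   # trivial initial predictors
    for _ in range(num_iters):
        # Open-loop fit on (reference, current) coefficient pairs.
        preds = design_per_freq_predictors(np.vstack(refs),
                                           np.vstack(coeff_frames[1:]))
        # Re-encode with the new predictors: prediction + quantized residual.
        recon = [preds * r + quantize(c - preds * r, step)
                 for r, c in zip(refs, coeff_frames[1:])]
        # Next iteration's references: frame 0 stays as-is (intra/original here),
        # frames 1..N-2 use this iteration's reconstructions.
        refs = [coeff_frames[0]] + recon[:-1]
    return preds


# Toy usage: 8 frames of 64 blocks x 16 frequencies with decaying temporal correlation.
frames = [rng.normal(size=(64, 16))]
for _ in range(7):
    frames.append(0.9 * frames[-1] + 0.4 * rng.normal(size=(64, 16)))
print(acl_design(frames))
```

The point the sketch captures is that each pass fits predictors against fixed reconstructions from the previous pass, so quantization error never feeds back within a single design iteration; only across iterations do the reconstructions and predictors co-adapt, which is what avoids the design instability that plagues naive closed-loop training.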