Quantized Warping and Residual Temporal Integration for Video Super-Resolution on Fast Motions

2020 
In recent years, numerous deep learning approaches to video super-resolution have been proposed, increasing the resolution of one frame using information found in neighboring frames. Such methods either warp frames into alignment using optical flow, or else forgo warping and use optical flow as an additional network input. In this work we point out the disadvantages inherent in these two approaches and propose a method that inherits the best features of both: warping with the integer part of the flow and providing the fractional part as an additional network input. Moreover, an iterative residual super-resolution approach is proposed to incrementally improve quality as more neighboring frames are provided. Incorporating the above in a recurrent architecture, we train and evaluate the proposed network, compare it against the state of the art, and note its superior performance on fast-motion sequences.
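To make the integer/fractional split concrete, the following is a minimal NumPy sketch of the idea described in the abstract. The names `quantized_warp`, `refine`, and `sr_net` are illustrative, not the authors' implementation, and the flow is assumed to map each target pixel to a source displacement stored in (x, y) order.

```python
import numpy as np

def quantized_warp(neighbor: np.ndarray, flow: np.ndarray):
    """Warp `neighbor` (H, W, C) by the integer part of `flow` (H, W, 2)
    and return the fractional remainder for use as a network input.

    Integer-displacement warping is a pure pixel shift, so it introduces
    no interpolation blur; the sub-pixel information is preserved in the
    returned fractional flow instead of being baked into the warp.
    """
    h, w = flow.shape[:2]
    int_flow = np.floor(flow).astype(np.int64)   # integer part of the flow
    frac_flow = flow - int_flow                  # fractional part, in [0, 1)

    # For each target pixel, gather the source pixel displaced by the
    # integer flow, clamping coordinates to the image border.
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(xs + int_flow[..., 0], 0, w - 1)
    src_y = np.clip(ys + int_flow[..., 1], 0, h - 1)
    warped = neighbor[src_y, src_x]              # pure gather, no blending

    return warped, frac_flow

def refine(estimate, neighbors, flows, sr_net):
    """Iterative residual refinement over neighboring frames (assumed
    recurrence): each step predicts a residual correction to the running
    high-resolution estimate. `sr_net` stands in for a trained network
    taking the current estimate, the warped neighbor, and the fractional
    flow as inputs.
    """
    for nb, fl in zip(neighbors, flows):
        warped, frac = quantized_warp(nb, fl)
        estimate = estimate + sr_net(estimate, warped, frac)  # residual update
    return estimate
```

In a training pipeline, the warped neighbor and the fractional flow would typically be concatenated channel-wise with the current estimate before being fed to the network; the exact interface of `sr_net` above is an assumption for illustration.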