Video Deblurring via Temporally and Spatially Variant Recurrent Neural Network

2020 
Camera shake and high-speed object motion often produce blurry videos, and it is hard to recover sharp videos using existing single- or multi-image deblurring methods because the blur artifacts in a video vary both temporally and spatially. In this paper, we propose a temporally and spatially variant recurrent neural network for video deblurring, in which both the temporal and spatial variants employ ConvGRU blocks and a weight generator to capture spatio-temporal features. The proposed model is trained in an end-to-end manner, with the number of output frames set equal to the number of input frames. Thus, our model does not reduce the number of frames in either the training or testing stage, which is important in practical applications. Quantitative and qualitative evaluations on standard benchmark datasets demonstrate that the proposed method outperforms current state-of-the-art methods.
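For concreteness, the recurrence can be sketched with a standard ConvGRU cell driven frame by frame, which also illustrates the one-output-per-input-frame design. This is a minimal sketch under assumptions: the names ConvGRUCell and deblur_sequence, the layer shapes, and the reconstruction head are illustrative, since the abstract does not specify the exact block design or the weight generator.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Standard convolutional GRU cell (hypothetical sketch; the paper's
    exact block design and its weight generator are not given in the abstract)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        # Update and reset gates computed from the input and previous hidden state.
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)
        # Candidate hidden state.
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        # Convex combination of the old state and the candidate.
        return (1 - z) * h + z * h_tilde

def deblur_sequence(frames, cell, head, hid_ch):
    """Run the recurrent cell over T frames and emit T deblurred frames,
    so the output sequence is as long as the input sequence."""
    b, t, c, hgt, w = frames.shape
    h = frames.new_zeros(b, hid_ch, hgt, w)
    outputs = []
    for i in range(t):
        h = cell(frames[:, i], h)
        outputs.append(head(h))  # e.g. a 1x1 or 3x3 conv mapping hid_ch -> 3
    return torch.stack(outputs, dim=1)

# Example usage with assumed sizes:
cell = ConvGRUCell(in_ch=3, hid_ch=32)
head = nn.Conv2d(32, 3, kernel_size=3, padding=1)
video = torch.randn(1, 5, 3, 64, 64)          # (batch, T, C, H, W)
restored = deblur_sequence(video, cell, head, hid_ch=32)
assert restored.shape == video.shape           # same number of frames in and out
```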