Real-Time Video Deraining via Global Motion Compensation and Hybrid Multi-Scale Temporal Correlations

2022 
Current video deraining algorithms mainly use adjacent frames to refine the target frame. However, they consider only inter-frame temporal correlations at a single, uniform scale, ignoring the temporal correlations that exist across different scales. In addition, high computational cost is another drawback of current video deraining algorithms. To this end, we propose a novel aggregation network that explores inter-frame multi-scale temporal correlations for video deraining at a small computational cost. First, we construct a hybrid multi-scale feature extraction structure in the network to increase the receptive field of multi-scale features. For similar rain streaks appearing at different scales in adjacent frames, a hybrid multi-scale residual block (HMSRB) is proposed to exploit the complementary and redundant information along the temporal dimension to characterize the target frame. At the same time, we introduce an improved global context module (GCM) that avoids the complex motion estimation and motion compensation (ME&MC) operations used in previous video deraining approaches, while reducing computational complexity. Finally, a fusion block adaptively merges the extracted features. Experiments demonstrate that the proposed network is more efficient and effective than existing algorithms.
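The core idea of aggregating adjacent frames via multi-scale temporal correlation can be illustrated with a simplified sketch. The NumPy code below is a conceptual illustration only, not the authors' implementation: it pools the target frame and each neighbor at several scales, scores each neighbor by its average cross-scale similarity (a stand-in for the learned correlations in the HMSRB), and fuses neighbors with softmax weights. All function names and the choice of pooling/correlation are assumptions for illustration.

```python
import numpy as np

def downsample(frame, scale):
    """Average-pool a 2-D frame by an integer factor (hypothetical helper)."""
    h, w = frame.shape
    h2, w2 = (h // scale) * scale, (w // scale) * scale
    f = frame[:h2, :w2].reshape(h2 // scale, scale, w2 // scale, scale)
    return f.mean(axis=(1, 3))

def multiscale_temporal_fusion(target, neighbors, scales=(1, 2, 4)):
    """Conceptual multi-scale temporal aggregation: correlate the target
    frame with each neighbor at several scales, then fuse the neighbors
    weighted by their average cross-scale similarity."""
    sims = []
    for nb in neighbors:
        per_scale = []
        for s in scales:
            t, n = downsample(target, s), downsample(nb, s)
            # Pearson correlation as a simple stand-in for a learned score.
            per_scale.append(float(np.corrcoef(t.ravel(), n.ravel())[0, 1]))
        sims.append(np.mean(per_scale))
    w = np.exp(sims) / np.sum(np.exp(sims))  # softmax fusion weights
    fused = sum(wi * nb for wi, nb in zip(w, neighbors))
    return fused, w

rng = np.random.default_rng(0)
target = rng.random((32, 32))
neighbors = [target + 0.1 * rng.random((32, 32)),  # temporally similar frame
             rng.random((32, 32))]                 # unrelated frame
fused, w = multiscale_temporal_fusion(target, neighbors)
```

In this toy setting the temporally similar neighbor receives the larger fusion weight, mirroring the intent that frames sharing rain-free content across scales contribute more to reconstructing the target frame.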