Star-Net: Spatial-Temporal Attention Residual Network for Video Deraining
2021
Learning-based video deraining has recently drawn increasing attention. Existing methods tend to directly stack aligned frames as input to a fully end-to-end network. However, such a network is generally object-driven and cannot learn how to exploit temporal information, so the results are unsatisfactory. In this work, we design a novel Spatial-Temporal Attention Residual Network (STAR-Net) to explicitly exploit temporal information. Concretely, we define a self-spatial attention to characterize the rain region of the target frame, and a temporal-spatial attention to learn, from the adjacent frame, the information that is profitable for remedying the rain region of the target frame. We also introduce a simple residual network to further strengthen the relationship between the target and adjacent frames. These attended frames are fused by a three-layer convolutional module to further improve performance. Extensive evaluations demonstrate our superiority over state-of-the-art methods.
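The abstract's two attention branches and residual fusion can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's implementation: the gate weights (`w_self`, `w_temp`), the sigmoid gating form, and the averaging fusion are all hypothetical stand-ins for the learned self-spatial attention, temporal-spatial attention, and three-layer convolutional fusion module described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_spatial_attention(target, w):
    # Per-pixel gate in (0, 1) highlighting likely rain regions
    # of the target frame (stand-in for the learned attention).
    att = sigmoid(target @ w)          # shape (H, W, 1)
    return target * att

def temporal_spatial_attention(target, adjacent, w):
    # Gate computed from both frames, selecting adjacent-frame
    # content useful for remedying the target's rain regions.
    att = sigmoid(np.concatenate([target, adjacent], axis=-1) @ w)
    return adjacent * att

H, W, C = 8, 8, 3
target   = rng.standard_normal((H, W, C))
adjacent = rng.standard_normal((H, W, C))

w_self = 0.1 * rng.standard_normal((C, 1))      # hypothetical weights
w_temp = 0.1 * rng.standard_normal((2 * C, 1))  # hypothetical weights

s = self_spatial_attention(target, w_self)
t = temporal_spatial_attention(target, adjacent, w_temp)

# Residual link preserves the target frame; a simple average stands
# in for the paper's three-layer convolutional fusion module.
fused = target + 0.5 * (s + t)
print(fused.shape)  # (8, 8, 3)
```

The residual connection mirrors the abstract's point that the network should keep the target frame's own content and only borrow corrective detail from the adjacent frame.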