A Stacked Deep MEMC Network for Frame Rate up Conversion and Its Application to HEVC

2020 
Optical flow estimation and video frame interpolation form a chicken-and-egg problem: the quality of each depends on the other. This paper presents a stack of deep networks that first synthesizes an initial intermediate frame, estimates intermediate optical flows from it, and then generates the final interpolated frame by combining the initial frame with two frames warped by the learned flows. The primary benefit is that the two problems are unified in a single framework trained jointly, combining an analysis-by-synthesis technique for optical flow estimation with Convolutional Neural Network (CNN) kernel-based frame synthesis. The proposed network is the first attempt to merge the two previous branches of approaches, optical-flow-based synthesis and CNN-kernel-based synthesis, into one comprehensive network. Experiments on several challenging datasets show that the proposed network outperforms state-of-the-art methods by significant margins for video frame interpolation, and that the estimated optical flows are more accurate for challenging motions. Furthermore, the proposed Motion Estimation Motion Compensation (MEMC) network substantially enhances the quality of compressed videos.
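The final synthesis step described above, blending an initial synthesized frame with two flow-warped frames, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`backward_warp`, `synthesize_intermediate`), the nearest-neighbor warping, and the fixed blending weights are assumptions for clarity; in the paper these components are learned by the stacked networks.

```python
import numpy as np

def backward_warp(frame, flow):
    # Warp a grayscale frame toward the intermediate time by sampling each
    # output pixel from the source location given by a dense flow field.
    # frame: (H, W); flow: (H, W, 2) holding per-pixel (dy, dx) offsets.
    # Nearest-neighbor sampling is used here; real MEMC uses bilinear warping.
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return frame[src_y, src_x]

def synthesize_intermediate(frame0, frame1, flow_t0, flow_t1,
                            initial_guess, weights=(0.4, 0.4, 0.2)):
    # Combine the two flow-warped input frames with the initial synthesized
    # frame. The blend weights are a hypothetical fixed choice; the paper's
    # network learns this combination.
    w0, w1, wg = weights
    warped0 = backward_warp(frame0, flow_t0)  # frame0 warped to time t
    warped1 = backward_warp(frame1, flow_t1)  # frame1 warped to time t
    return w0 * warped0 + w1 * warped1 + wg * initial_guess
```

With zero flows and identical input frames, the blend reduces to an identity: `synthesize_intermediate(f, f, zeros, zeros, f)` returns `f`, since the weights sum to one.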