Edge Guided Generation Network for Video Prediction

2018 
Video prediction is a challenging problem due to the highly complex variation of video appearance and motion. Traditional methods that directly predict pixel values often produce blurring and artifacts. Furthermore, cumulative errors can lead to a sharp drop in prediction quality for long-term prediction. To alleviate these problems, we propose a novel edge guided video prediction network, which first models the dynamics of frame edges and predicts the edges of future frames, then generates the future frames under the guidance of the predicted edges. Specifically, our network consists of two modules: a ConvLSTM-based edge prediction module and an edge guided frame generation module. The whole network is differentiable and can be trained end-to-end without additional supervision. Extensive experiments on the KTH human action dataset and the challenging autonomous driving KITTI dataset demonstrate that our method achieves better results than state-of-the-art methods, especially for long-term video prediction.
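The abstract describes a two-stage pipeline: a ConvLSTM rolls over past edge maps to predict the next frame's edges, and a generator then synthesizes the next RGB frame conditioned on those edges. The sketch below illustrates this design under assumptions of our own; the ConvLSTM cell, layer widths, and the simple convolutional generator are illustrative placeholders, not the authors' exact architecture.

```python
# Minimal sketch of the edge-prediction + edge-guided-generation pipeline,
# assuming a PyTorch implementation. All module definitions here are
# hypothetical stand-ins for the paper's architecture.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell: convolutional gates over spatial feature maps."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class EdgePredictor(nn.Module):
    """Module 1: roll a ConvLSTM over past edge maps, predict the next edge map."""

    def __init__(self, hid_ch=32):
        super().__init__()
        self.cell = ConvLSTMCell(1, hid_ch)
        self.head = nn.Conv2d(hid_ch, 1, 3, padding=1)

    def forward(self, edges):  # edges: (B, T, 1, H, W)
        b, t, _, hgt, wid = edges.shape
        h = edges.new_zeros(b, self.cell.hid_ch, hgt, wid)
        c = torch.zeros_like(h)
        for step in range(t):
            h, c = self.cell(edges[:, step], (h, c))
        return torch.sigmoid(self.head(h))  # predicted edges of the next frame


class EdgeGuidedGenerator(nn.Module):
    """Module 2: synthesize the next frame from the last frame plus predicted edges."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, last_frame, pred_edges):
        return self.net(torch.cat([last_frame, pred_edges], dim=1))


# Both modules are differentiable, so the pipeline can be trained end-to-end,
# e.g. with a reconstruction loss against the ground-truth next frame.
past_edges = torch.rand(2, 4, 1, 64, 64)   # edge maps of 4 past frames
last_frame = torch.rand(2, 3, 64, 64)      # most recent RGB frame
edge_hat = EdgePredictor()(past_edges)
frame_hat = EdgeGuidedGenerator()(last_frame, edge_hat)
print(frame_hat.shape)  # torch.Size([2, 3, 64, 64])
```

For multi-step (long-term) prediction, the predicted edges and frames can be fed back as inputs for the next step, which is where guiding generation with predicted edges is claimed to reduce the accumulation of errors.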