Self-attention-based Multiscale Feature Learning Optical Flow with Occlusion Feature Map Prediction

2021 
Even though optical flow approaches based on convolutional neural networks have achieved remarkable performance in both accuracy and efficiency, large displacements and motion occlusions remain challenging for most existing learning-based models. To address these issues, we propose a self-attention-based multiscale feature learning method for optical flow computation with occlusion feature map prediction. First, we employ a self-attention-based multiscale feature learning module to compensate for large-displacement optical flows; the module captures long-range dependencies from the input frames. Second, we design a simple but effective self-learning module to predict an occlusion feature map, which is used to correct the optical flow estimate in occluded areas. Third, we explore a hybrid loss function that integrates photometric and smoothness losses with the classical endpoint error (EPE) loss to ensure the accuracy and robustness of the proposed network. Finally, we compare the proposed method with state-of-the-art approaches on the MPI-Sintel and KITTI test databases. The experimental results demonstrate that the proposed method achieves competitive accuracy and robustness and outperforms the other methods under large displacements and motion occlusions.
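The abstract does not give the exact formulation of the hybrid loss, but the idea of combining a supervised EPE term with unsupervised photometric and smoothness terms can be sketched as follows. This is a minimal illustrative sketch, assuming a PyTorch setting with flow tensors of shape (B, 2, H, W) and images of shape (B, C, H, W); the helper names (hybrid_loss) and the weights w_photo and w_smooth are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def epe_loss(flow_pred, flow_gt):
    # Average endpoint error: per-pixel Euclidean distance between
    # predicted and ground-truth flow vectors.
    return torch.norm(flow_pred - flow_gt, p=2, dim=1).mean()

def photometric_loss(img1, img2, flow_pred):
    # Warp the second frame toward the first with the predicted flow
    # and penalize the brightness difference (L1).
    b, _, h, w = img1.shape
    grid_y, grid_x = torch.meshgrid(
        torch.arange(h, device=img1.device),
        torch.arange(w, device=img1.device),
        indexing="ij")
    grid = torch.stack((grid_x, grid_y), dim=0).float()      # (2, H, W)
    coords = grid.unsqueeze(0) + flow_pred                    # (B, 2, H, W)
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)   # (B, H, W, 2)
    img2_warped = F.grid_sample(img2, sample_grid, align_corners=True)
    return (img1 - img2_warped).abs().mean()

def smoothness_loss(flow_pred):
    # First-order smoothness: penalize spatial gradients of the flow field.
    dx = (flow_pred[:, :, :, 1:] - flow_pred[:, :, :, :-1]).abs().mean()
    dy = (flow_pred[:, :, 1:, :] - flow_pred[:, :, :-1, :]).abs().mean()
    return dx + dy

def hybrid_loss(flow_pred, flow_gt, img1, img2, w_photo=0.5, w_smooth=0.1):
    # Weighted sum of the supervised EPE term and the unsupervised
    # photometric and smoothness terms (weights are illustrative only).
    return (epe_loss(flow_pred, flow_gt)
            + w_photo * photometric_loss(img1, img2, flow_pred)
            + w_smooth * smoothness_loss(flow_pred))
```

In practice the photometric term is often masked by the predicted occlusion map so that occluded pixels, where brightness constancy does not hold, do not dominate the loss; the paper's occlusion feature map would play that role here.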