Integrating object proposal with attention networks for video saliency detection

2021 
Abstract Video saliency detection is an active research topic in both information science and visual psychology. In this paper, we propose an efficient video saliency-detection model, based on integrating object proposals with attention networks, for capturing salient objects and human attention areas in the dynamic scenes of videos. In our algorithm, visual object features are first extracted from each video frame using a real-time neural network for object detection. Then, the spatial position information of each frame is used to screen out large background regions in the video, so as to reduce the influence of background noise. Finally, the results, with backgrounds removed, are further refined by propagating the visual cues through an adaptive weighting scheme into the later layers of a convolutional neural network. Experimental results on widely used public databases for video saliency detection verify that our proposed framework outperforms existing deep models.
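The sketch below is a minimal illustration of the three-stage pipeline described in the abstract, not the authors' actual implementation: detect_object_proposals, screen_background, and adaptive_weighted_map are hypothetical stand-ins, assuming per-frame box proposals, a border-based spatial screening rule, and a confidence-weighted fusion into a dense saliency prior.

```python
import numpy as np

def detect_object_proposals(frame):
    """Hypothetical stand-in for a real-time object detector; returns boxes as
    ((x, y, w, h), confidence). Two boxes are fabricated here purely for
    illustration of the interface."""
    h, w = frame.shape[:2]
    return [((w // 4, h // 4, w // 4, h // 4), 0.9),
            ((w // 8, h // 2, w // 8, h // 8), 0.4)]

def screen_background(frame_shape, proposals, border_margin=0.05):
    """Simple spatial-position heuristic (an assumption, not the paper's exact
    rule): drop proposals whose centres fall near the frame border, treating
    them as likely background."""
    h, w = frame_shape[:2]
    kept = []
    for (x, y, bw, bh), score in proposals:
        cx, cy = x + bw / 2.0, y + bh / 2.0
        if border_margin * w < cx < (1 - border_margin) * w and \
           border_margin * h < cy < (1 - border_margin) * h:
            kept.append(((x, y, bw, bh), score))
    return kept

def adaptive_weighted_map(frame_shape, proposals):
    """Spread detection confidences into a dense saliency prior; overlapping
    boxes are normalised by their coverage count as a crude adaptive weight."""
    h, w = frame_shape[:2]
    saliency = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)
    for (x, y, bw, bh), score in proposals:
        saliency[y:y + bh, x:x + bw] += score
        weight[y:y + bh, x:x + bw] += 1.0
    return saliency / np.maximum(weight, 1.0)

if __name__ == "__main__":
    frame = np.zeros((240, 320, 3), dtype=np.uint8)  # dummy video frame
    props = detect_object_proposals(frame)
    props = screen_background(frame.shape, props)
    prior = adaptive_weighted_map(frame.shape, props)
    print("saliency prior range:", prior.min(), prior.max())
```

In the full model, this dense prior would be fed into the later layers of a convolutional attention network rather than used directly as the saliency output.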