New Contour Cue-Based Hybrid Sparse Learning for Salient Object Detection

2019 
Saliency detection has been a hot topic in recent years, and much effort has been made to address it from different perspectives. However, current saliency models cannot meet the needs of diversified scenes due to their limited generalization capability. To tackle this problem, in this paper we propose a hybrid saliency model that fuses heterogeneous visual cues for robust salient object detection. A new contour cue is first introduced to provide discriminative saliency information for scene description. It is formulated as a discrete optimization objective that can be solved efficiently with an iterative algorithm. The contour cue is then taken as part of a hybrid sparse learning model, in which cues from different domains interact with and complement each other for joint saliency fusion. This saliency fusion model is parameter-free, and its numerical solution can be obtained using gradient descent methods. Finally, we advance an object proposal-based collaborative filtering strategy to generate high-quality saliency maps from the fusion results. Compared with traditional methods, the proposed saliency model fuses heterogeneous cues in a unified optimization framework rather than combining them separately, and therefore has favorable modeling capability in diversified scenes where saliency patterns appear quite different. To verify the effectiveness of the proposed method, we conduct experiments on four large saliency benchmark datasets and compare it with 26 other state-of-the-art saliency models. Both qualitative and quantitative evaluation results indicate the superiority of our method, especially in challenging situations. In addition, we apply our saliency model to ship detection on radar platforms and obtain promising results compared with traditional detectors.
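The abstract does not specify the fusion objective, so the following is only a minimal sketch of the general idea of cue fusion by sparse learning solved with gradient descent: several heterogeneous per-region saliency cues are combined through non-negative, L1-regularized fusion weights. The function name `fuse_cues`, the self-supervised target, the regularization weight, and the step size are all illustrative assumptions, not the authors' actual formulation.

```python
# Hypothetical sketch: fuse K per-region saliency cues with sparse weights
# learned by projected (sub)gradient descent. Not the paper's actual model.
import numpy as np


def fuse_cues(cues, lam=0.05, lr=0.01, n_iter=500):
    """cues: (N, K) array; column k holds saliency cue k for N regions.
    Returns (weights, fused_map) with fused_map rescaled to [0, 1]."""
    N, K = cues.shape
    # Surrogate target: the average of all cues (an assumption used here
    # because no supervision is available in this toy setting).
    target = cues.mean(axis=1)
    w = np.full(K, 1.0 / K)                 # start from uniform fusion weights
    for _ in range(n_iter):
        residual = cues @ w - target        # reconstruction error
        grad = cues.T @ residual / N        # gradient of 0.5*||Cw - t||^2 / N
        grad += lam * np.sign(w)            # L1 subgradient encourages sparsity
        w -= lr * grad
        w = np.clip(w, 0.0, None)           # keep weights non-negative
    fused = cues @ w
    fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)
    return w, fused


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_cues = rng.random((200, 3))         # 200 regions, 3 heterogeneous cues
    weights, saliency = fuse_cues(toy_cues)
    print("fusion weights:", weights)
```

In this toy setting the sparsity term simply down-weights cues that contribute little to the reconstruction; the paper's hybrid model additionally lets the contour cue interact with cues from other domains inside one unified optimization.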