Multi-scale patch-based sparse appearance model for robust object tracking

2014 
When objects undergo large pose changes, illumination variations, or partial occlusion, many existing visual tracking algorithms tend to drift away from the target or fail entirely. To address this issue, we propose a multi-scale patch-based appearance model built on sparse representation, together with an efficient scheme in which multi-scale patches encoded by sparse coefficients collaborate. The key idea of our method is to model the appearance of an object with patches at several scales, each represented by sparse coefficients over a dictionary of matching scale; the model thereby exploits both the partial (local) and the spatial information of the target. The similarity score of each candidate target is then fed into a particle filter framework to estimate the target state sequentially over time. Additionally, to reduce the visual drift caused by frequent model updates, we present a novel two-step tracking method that exploits both the ground-truth target labeled in the first frame and the target obtained online with multi-scale patch information. Experiments on publicly available benchmark video sequences show that the similarity measure, which combines complementary information, locates targets more accurately, and that the proposed tracker is more robust and effective than competing methods.
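The abstract describes encoding patches at several scales with per-scale dictionaries and scoring candidates by a similarity over the resulting sparse codes. Below is a minimal sketch of that pipeline, not the paper's exact formulation: the patch sizes, the LASSO sparsity weight, the cosine-pooled similarity, and names such as `encode_multiscale` are all illustrative assumptions, and the random dictionaries in the demo stand in for ones learned offline.

```python
# A minimal sketch of multi-scale patch sparse coding for tracking,
# assuming square patches and one dictionary per scale (illustrative only).
import numpy as np
from sklearn.decomposition import SparseCoder
from sklearn.feature_extraction.image import extract_patches_2d

def encode_multiscale(region, dictionaries, patch_sizes):
    """Encode a candidate region with one dictionary per patch scale.

    region       : 2-D grayscale array (the candidate target window)
    dictionaries : dict mapping patch size -> (n_atoms, size*size) array
    patch_sizes  : square patch side lengths, e.g. [4, 8]
    Returns a dict of sparse coefficient matrices, one per scale.
    """
    codes = {}
    for s in patch_sizes:
        patches = extract_patches_2d(region, (s, s))          # all s x s patches
        X = patches.reshape(len(patches), -1).astype(float)   # flatten to vectors
        coder = SparseCoder(dictionary=dictionaries[s],
                            transform_algorithm="lasso_lars",
                            transform_alpha=0.1)              # sparsity weight (assumed)
        codes[s] = coder.transform(X)                         # sparse coefficients
    return codes

def similarity(codes_candidate, codes_template):
    # One plausible score: cosine similarity of mean-pooled codes, summed
    # across scales. The paper's actual similarity may differ.
    score = 0.0
    for s in codes_candidate:
        a = codes_candidate[s].mean(axis=0)
        b = codes_template[s].mean(axis=0)
        score += a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sizes = [4, 8]
    # Random, row-normalized dictionaries stand in for ones learned offline.
    dicts = {s: (lambda D: D / np.linalg.norm(D, axis=1, keepdims=True))(
                 rng.standard_normal((50, s * s))) for s in sizes}
    template = rng.random((32, 32))
    candidate = template + 0.05 * rng.standard_normal((32, 32))
    print(similarity(encode_multiscale(candidate, dicts, sizes),
                     encode_multiscale(template, dicts, sizes)))
```

In a particle filter, a score like this would serve as the observation likelihood: each particle's candidate window is encoded and scored against the template, and the score weights the particle before resampling.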