Learning Motion-Aware Policies for Robust Visual Tracking

2019 
Visual object tracking aims to locate a moving target specified in the initial frame. Although this task is closely related to temporal motion information, the motion model typically draws limited attention. In this paper, we propose a motion-aware multi-domain network for robust visual tracking. In our approach, a motion-aware agent is trained via reinforcement learning to infer the parameters of the particle filter in a continuous action space. Unlike existing tracking-by-detection frameworks, in which the particle filter relies merely on the previous target state, our motion-aware agent, after receiving the current state, can adaptively change the parameters of the particle filter (e.g., particle location and scale range). As a result, our approach samples high-quality candidates for further classification/tracking, and can thus better handle challenges such as fast motion and scale variation. Extensive experiments on large-scale benchmarks verify the effectiveness of our method.
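To make the sampling step concrete, the following is a minimal sketch of a particle filter whose parameters are supplied by a policy rather than fixed, as the abstract describes. The function and variable names, the state layout (x, y, scale), and the placeholder `fixed_policy` are illustrative assumptions, not the authors' code; in the paper the policy is a learned reinforcement-learning agent.

```python
import numpy as np

def sample_candidates(prev_state, policy, n_particles=256, rng=None):
    """Sample candidate target states for later classification.

    prev_state : (x, y, scale) of the target in the previous frame.
    policy     : maps the tracker state to continuous particle-filter
                 parameters (translation std and scale std), playing the
                 role of the motion-aware agent in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, y, s = prev_state
    # The agent outputs the sampling parameters instead of using fixed ones,
    # so the search region can widen under fast motion or scale change.
    trans_std, scale_std = policy(prev_state)
    candidates = np.empty((n_particles, 3))
    candidates[:, 0] = rng.normal(x, trans_std, n_particles)  # x positions
    candidates[:, 1] = rng.normal(y, trans_std, n_particles)  # y positions
    candidates[:, 2] = s * np.exp(rng.normal(0.0, scale_std, n_particles))
    return candidates

# A fixed-parameter baseline policy (the conventional setting the paper
# improves on); a learned agent would adapt these values per frame.
fixed_policy = lambda state: (10.0, 0.05)
cands = sample_candidates((100.0, 80.0, 1.0), fixed_policy)
print(cands.shape)  # (256, 3)
```

A learned policy would replace `fixed_policy`, outputting larger translation and scale deviations when the current state suggests fast motion or scale variation.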