Drift-proof Tracking with Deep Reinforcement Learning

2021 
Object tracking is an essential and challenging sub-domain of computer vision owing to its wide range of applications and the complexity of real-life situations. It has been studied extensively over the last decade, leading to the proposal of several tracking frameworks and approaches. Recently, the introduction of reinforcement learning and the Actor-Critic framework has effectively improved the tracking speed of deep learning trackers. However, most existing deep reinforcement learning trackers experience a slight performance degradation, mainly owing to drift issues. Drift threatens tracking performance and may cause the tracked target to be lost entirely. Herein, we propose a drift-proof tracker with deep reinforcement learning that aims to improve tracking performance by counteracting drift while maintaining its real-time advantage. We utilize a reward function based on the Distance-IoU (DIoU) metric to guide the reinforcement learning and alleviate the drift caused by the trained model. Furthermore, double negative samples (hard negative and drift samples) are constructed during tracking for network initialization, and the loss is then calculated with a loss function that is tolerant of small errors. As a result, our tracker can better discriminate between positive and negative samples and correct the predicted bounding boxes when drift occurs. Meanwhile, a generative adversarial network is adopted for positive sample augmentation. Extensive experimental results on multiple popular benchmarks show that our algorithm effectively reduces the occurrence of drift and boosts tracking performance compared with other state-of-the-art trackers.
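The abstract does not spell out the reward function, but the DIoU metric it builds on is standard: DIoU = IoU − d²/c², where d is the distance between the two box centres and c is the diagonal of the smallest box enclosing both. The sketch below implements that standard definition for axis-aligned boxes in (x1, y1, x2, y2) format; how the paper maps this value into a per-step reward is not specified here, so this is illustrative only.

```python
def diou(box_a, box_b):
    """Distance-IoU between two boxes given as (x1, y1, x2, y2).

    DIoU = IoU - d^2 / c^2, where d is the distance between the box
    centres and c is the diagonal of the smallest enclosing box.
    Returns a value in (-1, 1]; 1.0 means a perfect match, and the
    distance term still penalizes far-apart boxes when IoU is 0,
    which is why it suits drift correction better than plain IoU.
    """
    # Intersection area (zero if the boxes do not overlap)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter)

    # Squared distance between the two box centres
    cax, cay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cbx, cby = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    d2 = (cax - cbx) ** 2 + (cay - cby) ** 2

    # Squared diagonal of the smallest enclosing box
    ex1, ey1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    ex2, ey2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2

    return iou - d2 / c2
```

Note that for two disjoint boxes plain IoU is flat at 0, whereas DIoU still decreases as the predicted box drifts farther from the ground truth, giving the learner a usable gradient signal.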