VIDEO ANNOTATION BY ACTIVE LEARNING AND SEMI-SUPERVISED ENSEMBLING

2006 
Supervised and semi-supervised learning are frequently applied to annotate videos by mapping low-level features to semantic concepts. Due to the large semantic gap, the main constraint of these methods is that the information contained in a limited-size labeled dataset can hardly represent the distributions of the semantic concepts. In this paper, we propose a novel semi-automatic video annotation framework, active learning with semi-supervised ensembling, which addresses the shortcomings of current video annotation solutions. First, the initial training set is constructed based on a distribution analysis of the entire video dataset. Then an active learning scheme is integrated into a semi-supervised ensembling framework, which selects the samples that maximize the margin of the ensemble classifier based on both labeled and unlabeled data. Experimental results show that the proposed method outperforms general semi-supervised learning algorithms and typical active learning algorithms in terms of annotation accuracy and stability.
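The sketch below illustrates one plausible reading of such a loop: an ensemble classifier is retrained each round, confident unlabeled samples receive pseudo-labels (the semi-supervised step), and the lowest-margin samples are sent to a human annotator (the active learning step). It is not the authors' implementation; the use of scikit-learn's RandomForestClassifier as the ensemble, the confidence threshold, and the ask_human labeling call are all assumptions for illustration.

```python
# Hedged sketch of an active-learning + semi-supervised-ensembling loop.
# Assumptions: RandomForestClassifier stands in for the ensemble, the 0.8
# confidence threshold is illustrative, and ask_human is a hypothetical
# oracle that returns labels for the queried video samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def annotate(X_labeled, y_labeled, X_unlabeled, ask_human, rounds=10, batch=20):
    """Iteratively grow the labeled pool with pseudo-labels and oracle queries."""
    for _ in range(rounds):
        # Ensemble classifier trained on the current labeled pool.
        clf = RandomForestClassifier(n_estimators=50).fit(X_labeled, y_labeled)

        # Ensemble margin: gap between the top two class probabilities.
        proba = clf.predict_proba(X_unlabeled)
        top2 = np.sort(proba, axis=1)[:, -2:]
        margin = top2[:, 1] - top2[:, 0]

        # Semi-supervised step: pseudo-label high-margin (confident) samples.
        confident = margin > 0.8
        pseudo_X = X_unlabeled[confident]
        pseudo_y = clf.classes_[proba[confident].argmax(axis=1)]

        # Active learning step: query the lowest-margin non-confident samples.
        masked_margin = np.where(confident, np.inf, margin)
        query = np.argsort(masked_margin)[:batch]
        oracle_y = ask_human(X_unlabeled[query])

        # Grow the labeled pool and shrink the unlabeled pool.
        X_labeled = np.vstack([X_labeled, pseudo_X, X_unlabeled[query]])
        y_labeled = np.concatenate([y_labeled, pseudo_y, oracle_y])
        keep = ~confident
        keep[query] = False
        X_unlabeled = X_unlabeled[keep]
    return clf
```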