Notebook paper: TNO instance search submission 2011

The TNO instance search submission to TRECVID 2011 consisted of three runs: one using exhaustive keypoint search, one using a bag-of-visual-words approach, and one using open-source face-recognition software.

Briefly, what approach or combination of approaches did you test in each of your submitted runs?
    • All runs: video decoding using the ffmpeg library, sampling every 25th frame (illustrative sketches of each stage follow the questions below).
    • F_X_NO_TNO-SURFAC2_1: standard SURF keypoint detection, exhaustive search.
    • F_X_NO_TNO-BOWCOL_2: standard SURF keypoint detection, bag-of-words using 256 prototypes from the queries, indexing and querying using Lemur.
    • F_X_NO_TNO-SUREIG_3: face detection and recognition with Eigenface descriptors.

What if any significant differences (in terms of what measures) did you find among the runs?
In terms of average precision, TNO run 1 significantly outperforms TNO run 2. Runs 1 and 3 differ only on the PERSON queries.

Based on the results, can you estimate the relative contribution of each component of your system/approach to its effectiveness?
The results of TNO run 3 show that face-recognition software can contribute to effectiveness on PERSON queries. From the difference between TNO runs 1 and 2, we estimate that the contribution of choosing exhaustive keypoint search over bag-of-words with a small visual vocabulary built on the query images is large.

Overall, what did you learn about runs/approaches and the research question(s) that motivated them?
Face-recognition software can contribute on this dataset if the queries contain large frontal faces. Exhaustive keypoint search significantly outperforms bag-of-words. An image-retrieval system can be built entirely from open-source components.
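As a rough illustration of the shared preprocessing step, the sketch below decodes a video and keeps every 25th frame. The paper uses the ffmpeg library directly; this sketch goes through OpenCV's VideoCapture (which wraps ffmpeg when available), and the function name sample_frames is ours, not the paper's.

```python
import cv2

def sample_frames(video_path, step=25):
    """Yield (index, frame) for every `step`-th decoded frame.

    The submission samples every 25th frame; OpenCV's VideoCapture
    uses the ffmpeg backend for decoding when it is available.
    """
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            yield idx, frame
        idx += 1
    cap.release()
```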
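Run 1 scores each sampled frame by exhaustively matching SURF descriptors against the query. A minimal sketch, assuming opencv-contrib with the non-free xfeatures2d module; the Hessian threshold and the ratio test are illustrative defaults, not parameters reported in the paper.

```python
import cv2

# SURF lives in the non-free xfeatures2d module of opencv-contrib.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def exhaustive_match_score(query_gray, frame_gray, ratio=0.75):
    """Count distinctive SURF matches between a query image and a frame."""
    _, q_desc = surf.detectAndCompute(query_gray, None)
    _, f_desc = surf.detectAndCompute(frame_gray, None)
    if q_desc is None or f_desc is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)  # brute force = exhaustive search
    pairs = matcher.knnMatch(q_desc, f_desc, k=2)
    # Lowe's ratio test keeps matches that clearly beat the runner-up.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)
```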
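Run 2 quantizes the same SURF descriptors against a 256-word vocabulary learned from the query images; the Lemur indexing and querying step is not reproduced here. A sketch of the vocabulary and histogram stage, assuming SciPy's k-means as the clusterer; the paper does not say which clustering implementation was used.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def build_vocabulary(query_descriptors, k=256):
    """Cluster SURF descriptors from the query images into k visual words
    (the submission uses 256 prototypes learned from the queries)."""
    data = np.vstack(query_descriptors).astype(np.float64)
    centroids, _ = kmeans2(data, k, minit="++")
    return centroids

def bow_histogram(frame_descriptors, centroids):
    """Map one frame's descriptors to a normalized bag-of-words histogram,
    which would then be indexed as a document in Lemur."""
    words, _ = vq(frame_descriptors.astype(np.float64), centroids)
    hist = np.bincount(words, minlength=len(centroids)).astype(np.float64)
    return hist / max(hist.sum(), 1.0)
```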
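Run 3 detects faces and describes them with Eigenfaces. The paper only says it uses open-source face-recognition software with Eigenface descriptors; the sketch below shows one plausible open-source pipeline using opencv-contrib's Haar cascade detector and EigenFaceRecognizer, chosen here purely for illustration.

```python
import cv2
import numpy as np

# Frontal-face detector and Eigenface recognizer from opencv-contrib.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.EigenFaceRecognizer_create()

def detect_faces(gray, size=(64, 64)):
    """Return cropped, resized frontal-face regions from a grayscale frame."""
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], size) for (x, y, w, h) in boxes]

# Train on faces cropped from the query images (hypothetical variables),
# then rank frames by the Eigenface distance of their best-matching face:
# recognizer.train(query_faces, np.array(query_labels))
# label, distance = recognizer.predict(face)  # lower distance = better match
```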