Efficient Video Grounding With Which-Where Reading Comprehension

2022 
Video grounding aims at localizing the temporal moment related to a given language description, which benefits many cross-modal content understanding applications such as visual question answering and sentence-to-video search. Existing approaches usually directly regress the temporal boundaries of an event described by a query sentence in the video sequence. This direct regression often encounters a large decision space due to diverse target events and variable video durations, leading to inaccurate localization as well as inefficient grounding. This paper presents an efficient framework, termed "from which to where," to facilitate video grounding. The core idea is to imitate the reading comprehension process to gradually narrow the decision space, in which we decompose the direct regression into two steps. The "which" step first roughly selects a candidate area by evaluating which video segment in a predefined set is closest to the ground truth. To this end, we formulate this step as a multi-choice reading comprehension problem and propose a criterion to select the best-matched segment. In this way, the excessive decision space is effectively reduced. The "where" step then precisely regresses the temporal boundary of the selected video segment within the shrunk decision space. We thus introduce a triple-span representation for each candidate video segment to exploit the regional context for better boundary regression. The "which" and "where" steps can be combined into a unified framework and learned end-to-end, leading to an efficient video grounding system. Extensive experiments on the Charades-STA, ActivityNet-Captions, and TACoS benchmarks clearly demonstrate the effectiveness of our framework.
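To make the two-step decomposition concrete, below is a minimal PyTorch sketch of the which-where idea as described in the abstract. All module names, dimensions, and the assumption that segment features are already query-conditioned are illustrative choices, not the paper's actual architecture: the "which" head scores a predefined set of candidate segments (multi-choice selection), and the "where" head regresses boundary offsets from a triple-span representation (the chosen segment plus its left and right context).

```python
import torch
import torch.nn as nn

class WhichWhereGrounder(nn.Module):
    """Sketch of the two-step 'which-where' grounding idea.

    Assumes seg_feats, left_ctx, right_ctx are per-segment features
    already fused with the query sentence; names and dims are
    hypothetical, not the paper's actual design.
    """

    def __init__(self, dim=256, num_segments=16):
        super().__init__()
        # "Which" head: scores each predefined candidate segment
        # against the query (multi-choice reading comprehension).
        self.which_head = nn.Linear(dim, 1)
        # "Where" head: regresses (start, end) offsets from the
        # triple-span representation, hence 3 * dim input features.
        self.where_head = nn.Linear(3 * dim, 2)

    def forward(self, seg_feats, left_ctx, right_ctx):
        # Each input: (batch, num_segments, dim)
        # "Which" step: pick the candidate closest to the ground truth.
        scores = self.which_head(seg_feats).squeeze(-1)      # (B, S)
        best = scores.argmax(dim=-1)                         # (B,)
        idx = best.view(-1, 1, 1).expand(-1, 1, seg_feats.size(-1))
        # Triple-span representation: left context | segment | right context.
        chosen = torch.cat([
            left_ctx.gather(1, idx),
            seg_feats.gather(1, idx),
            right_ctx.gather(1, idx),
        ], dim=-1).squeeze(1)                                # (B, 3*dim)
        # "Where" step: refine boundaries inside the shrunk space.
        offsets = self.where_head(chosen)                    # (B, 2)
        return scores, best, offsets
```

In this sketch, the argmax selection would be replaced at training time by supervision on the segment scores (the "which" criterion) alongside a regression loss on the offsets, so both steps can be learned end-to-end as the abstract states.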