Weakly-Supervised Visual Instrument-Playing Action Detection in Videos

2019 
Music videos are among the most popular content on video streaming services, and instrument playing is among the most common scenes in such videos. To understand instrument-playing scenes, it is important to know which instruments are played, when they are played, and where in the frame the playing actions occur. While audio-based recognition of instruments has been widely studied, the visual aspect of musical instrument playing remains largely unaddressed in the literature. One of the main obstacles is the difficulty of collecting action-location annotations for training-based methods. To address this issue, we propose a weakly supervised framework to find when and where instruments are played in videos. We propose using two auxiliary models to supervise the training of the instrument-playing action model: 1) a sound model, which provides temporal supervision, and 2) an object model, which provides spatial supervision. Together they supply supervision in both time and space. The resulting model needs to analyze only the visual part of a music video to deduce which instruments are played, and when and where. We found that the proposed method significantly improves localization accuracy. We evaluate the proposed method both temporally and spatially on a small dataset (5,400 frames in total) that we manually annotated.
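The core idea above, i.e. distilling temporal supervision from a sound model and spatial supervision from an object model into a visual action model, can be sketched as follows. This is a simplified illustration, not the paper's actual architecture or objective: the array shapes, the max-pooling aggregation, and the squared-error losses are all assumptions made for the example.

```python
import numpy as np

def weak_supervision_loss(action_scores, sound_labels, object_masks):
    """Hypothetical combined weak-supervision loss.

    action_scores: (T, H, W) predicted playing-action heatmaps in [0, 1]
    sound_labels:  (T,)      frame-level "instrument is sounding" targets
                             from a pretrained sound model
    object_masks:  (T, H, W) instrument presence maps in [0, 1]
                             from a pretrained object model
    """
    # Temporal supervision: the spatial max of each frame's heatmap
    # should agree with the sound model's frame-level prediction.
    frame_scores = action_scores.reshape(len(action_scores), -1).max(axis=1)
    temporal_loss = np.mean((frame_scores - sound_labels) ** 2)

    # Spatial supervision: on frames where the instrument sounds, the
    # action heatmap should not fire outside the object model's mask.
    spatial_loss = np.mean(
        sound_labels[:, None, None]
        * np.maximum(action_scores - object_masks, 0.0) ** 2
    )

    return temporal_loss + spatial_loss
```

In this sketch, a heatmap that peaks exactly when the sound model hears the instrument and stays inside the object mask incurs zero loss, so the action model learns localization without any manually annotated action locations.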