Toward Visual Voice Activity Detection for Unconstrained Videos

2019 
The prevalent audio-based Voice Activity Detection (VAD) systems are challenged by the presence of ambient noise and are sensitive to variations in the type of noise. The use of information from the visual modality, when available, can help overcome some of the problems of audio-based VAD. Existing visual-VAD systems, however, do not operate directly on the whole image but require intermediate face detection, face landmark detection, and subsequent facial feature extraction from the lip region. In this work we present an end-to-end trainable Hierarchical Context Aware (HiCA) architecture for visual-VAD in videos obtained in unconstrained environments, which can be trained with videos as input and audio-derived speech labels as output. The network is designed to account for both local and global temporal information in a video sequence. In contrast to existing visual-VAD systems, our proposed approach does not rely on face detection and subsequent facial feature extraction. It obtains a VAD accuracy of 66% on a dataset of Hollywood movie videos using visual information alone. Further analysis of the representations learned by our visual-VAD system shows that the network learns to localize human faces, and sometimes specifically speaking faces. Our quantitative analysis of the effectiveness of face localization shows that our system performs better than sound-localization networks designed for unconstrained videos.
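To make the described setup concrete, the sketch below shows one way an end-to-end model with local and global temporal context could be wired up for frame-level visual VAD trained against audio-derived speech labels. This is a minimal illustration assuming a PyTorch implementation; the module names, layer choices, and sizes are assumptions for exposition and do not reproduce the authors' actual HiCA architecture.

```python
# Minimal sketch of a hierarchical local/global temporal model for visual VAD.
# All layer choices, sizes, and names are illustrative assumptions, not the
# authors' actual HiCA implementation.
import torch
import torch.nn as nn


class VisualVADSketch(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Per-frame visual encoder: maps each RGB frame to a feature vector,
        # operating on the whole image (no face detection or landmarks).
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Local temporal context: 1D convolution over a short window of frames.
        self.local_context = nn.Conv1d(feat_dim, feat_dim, kernel_size=5, padding=2)
        # Global temporal context: bidirectional GRU over the whole clip.
        self.global_context = nn.GRU(feat_dim, hidden_dim, batch_first=True,
                                     bidirectional=True)
        # Frame-level speech / non-speech classifier.
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.frame_encoder(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        local = self.local_context(feats.transpose(1, 2)).transpose(1, 2)
        global_feats, _ = self.global_context(local)
        return self.classifier(global_feats).squeeze(-1)  # per-frame logits


if __name__ == "__main__":
    clip = torch.randn(2, 16, 3, 64, 64)           # two 16-frame clips
    logits = VisualVADSketch()(clip)               # (2, 16) frame-level logits
    labels = torch.randint(0, 2, (2, 16)).float()  # audio-derived speech labels
    loss = nn.BCEWithLogitsLoss()(logits, labels)  # end-to-end training signal
    print(logits.shape, loss.item())
```

The key design point mirrored here is that supervision comes only from per-frame speech labels (obtainable from the audio track), so the whole pipeline from raw frames to VAD decisions can be trained end to end without any face detection or lip-region feature extraction stage.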