Video Codec Forensics Based on Convolutional Neural Networks

2018 
The recent development of multimedia has made video editing accessible to everyone. Unfortunately, forensic analysis tools capable of detecting traces left by video processing operations in a blind fashion are still in their infancy. One of the reasons is that videos are customarily stored and distributed in a compressed format, and codec-related traces tend to mask previous processing operations. In this paper, we propose to capture video codec traces through convolutional neural networks (CNNs) and exploit them as an asset. Specifically, we train two CNNs to extract information about the used video codec and coding quality, respectively. Building upon these CNNs, we propose a system to detect and localize temporal splicing for video sequences generated from the concatenation of different video segments, which are characterized by inconsistent coding schemes and/or parameters (e.g., video compilations from different sources or broadcasting channels). The proposed solution is validated using videos at different resolutions (i.e., CIF, 4CIF, PAL and 720p) encoded with four common codecs (i.e., MPEG2, MPEG4, H264 and H265) at different qualities (i.e., different constant and variable bitrates, as well as constant quantization parameters).
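The sketch below illustrates the general idea in PyTorch: a small patch-level CNN that classifies the source codec, and a naive temporal change-point rule that flags frames where the predicted codec label switches as candidate splice points. The architecture, patch size, and the `CodecCNN`/`localize_splicing` names are assumptions for illustration; the paper's actual network design and aggregation strategy are not reproduced here.

```python
import torch
import torch.nn as nn

# Hypothetical codec labels matching the four codecs tested in the paper.
CODECS = ["MPEG2", "MPEG4", "H264", "H265"]


class CodecCNN(nn.Module):
    """Illustrative patch-level codec classifier (not the paper's exact architecture)."""

    def __init__(self, num_classes: int = len(CODECS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) luma patches extracted from decoded frames.
        h = self.features(x).flatten(1)
        return self.classifier(h)


def localize_splicing(frame_logits: torch.Tensor) -> list:
    """Flag frame indices where the predicted codec label changes.

    frame_logits: (num_frames, num_classes) per-frame CNN outputs.
    A real system would aggregate patch scores and smooth over time;
    this is a minimal change-point heuristic for illustration.
    """
    labels = frame_logits.argmax(dim=1)
    return [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]


if __name__ == "__main__":
    model = CodecCNN()
    # Dummy batch of 8 grayscale 64x64 patches standing in for decoded frames.
    logits = model(torch.randn(8, 1, 64, 64))
    print("predicted codecs:", [CODECS[i] for i in logits.argmax(dim=1)])
    print("candidate splice points:", localize_splicing(logits))
```

A second, analogous network trained on coding quality (bitrate or quantization parameter classes) would supply the quality-inconsistency cue mentioned in the abstract, and its per-frame predictions could be fused with the codec predictions before the change-point step.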