Obstructing DeepFakes by Disrupting Face Detection and Facial Landmarks Extraction

2021 
Recent years have seen rapid progress in synthesizing realistic human faces using AI technologies. AI-synthesized fake faces can be weaponized to cause negative personal and social impact. In this work, we develop technologies to defend individuals from becoming victims of recent AI-synthesized fake videos by sabotaging would-be training data. This is achieved by disrupting deep neural network (DNN)-based face detection and facial landmark extraction methods with specially designed imperceptible adversarial perturbations to reduce the quality of the detected faces. We empirically show the effectiveness of our methods in disrupting state-of-the-art DNN-based face detectors and facial landmark extractors on several datasets.
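As a rough illustration of the general idea rather than the paper's exact algorithm, the sketch below applies a standard PGD-style L-infinity attack that maximizes a face detector's loss on an image, producing an imperceptible perturbation intended to degrade detection. Here `detector_loss`, the step size, the perturbation budget, and the iteration count are illustrative assumptions, not values taken from the paper.

```python
import torch

def pgd_disrupt(image, detector_loss, epsilon=8/255, alpha=2/255, steps=40):
    """Sketch of an L-infinity PGD attack against a face detector.

    image         : tensor in [0, 1], e.g. shape (1, 3, H, W)
    detector_loss : hypothetical callable mapping an image tensor to the
                    scalar loss of the target face detector or landmark
                    extractor (an assumption for this sketch)
    epsilon       : maximum per-pixel perturbation (L-infinity budget)
    alpha         : step size per iteration
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Ascend the detector's loss so detection / landmark quality degrades.
        loss = detector_loss(image + delta)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            # Project back into the epsilon-ball and the valid pixel range.
            delta.clamp_(-epsilon, epsilon)
            delta.copy_((image + delta).clamp(0, 1) - image)
            delta.grad.zero_()
    return (image + delta).detach()
```

In a usage scenario consistent with the abstract, such a perturbation would be applied to a person's photos or video frames before sharing them, so that DeepFake pipelines relying on automatic face detection and landmark extraction receive lower-quality training data.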