Differential Privacy in Deep Learning: An Overview

2019 
Nowadays, deep learning has many applications in our daily life, such as self-driving cars, product recommendation, advertising, and healthcare. In the training phase, deep learning models are trained on datasets whose private information is implicitly stored in the model parameters. In some cases this private information can be inferred from the model parameters and traced back to uncover sensitive information about individuals. Privacy challenges of this kind have traditionally been addressed with anonymization methods such as k-anonymity, l-diversity, and t-closeness, which may no longer be sufficient for unstructured data or against inference attacks. However, we argue that this problem can be solved by differential privacy. Differential privacy provides a mathematical framework for quantifying the extent to which a deep learning algorithm remembers information about individuals, and thus for evaluating deep learning models for privacy guarantees. In this paper, we review threats to and defenses of privacy in deep learning models, with an emphasis on differential privacy. We classify the threats and defenses, and identify the points in a deep learning pipeline at which random noise can be added to input samples, gradients, or the objective function to protect model privacy.
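
As a concrete anchor for the framework discussed above, the standard guarantee (a textbook statement, not quoted from this paper) says a randomized mechanism M is (\varepsilon, \delta)-differentially private if, for every pair of datasets D and D' differing in a single record and every set of outcomes S,

    \Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S] + \delta,

so the presence or absence of any one individual changes the output distribution by at most a factor of e^{\varepsilon}, up to slack \delta.

Of the three noise-injection points identified above, perturbing gradients is the most widely used in deep learning (the DP-SGD approach). The following is a minimal sketch of one such step, assuming NumPy; the function name dp_sgd_step and the parameter values are illustrative, not taken from the paper.

    import numpy as np

    def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                    rng=np.random.default_rng(0)):
        """One differentially private gradient step: clip each per-example
        gradient to bound its sensitivity, average, then add Gaussian noise."""
        clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
                   for g in per_example_grads]
        avg = np.mean(clipped, axis=0)
        # Noise calibrated to the clipping bound yields an (epsilon, delta)
        # guarantee per step; the total budget over training follows from
        # a composition theorem.
        noise = rng.normal(0.0,
                           noise_multiplier * clip_norm / len(per_example_grads),
                           size=avg.shape)
        return avg + noise

    # Toy usage: four per-example gradients of a three-parameter model.
    grads = [np.array([0.5, -1.2, 0.3]), np.array([2.0, 0.1, -0.4]),
             np.array([-0.7, 0.9, 1.5]), np.array([0.2, -0.3, 0.8])]
    print(dp_sgd_step(grads))

Input perturbation and objective (function) perturbation follow the same pattern, with calibrated noise added to the training samples or to the loss function rather than to the gradient.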