Differential Privacy Stochastic Gradient Descent with Adaptive Privacy Budget Allocation

2021 
Stochastic gradient descent (SGD) is a classical algorithm for model optimization in machine learning. Introducing a differential privacy model to prevent privacy leakage during the iterative optimization process makes it possible to balance training accuracy against data utility. In a conventional implementation, a fixed number of iterations is chosen, and at each iteration the parameters are updated with a noisy gradient. However, the privacy budget is usually split evenly across iterations, without accounting for the fact that the privacy leakage risk varies from iteration to iteration, which is suboptimal. In this paper, we improve SGD-based algorithms by allocating the privacy budget appropriately to each iteration. Intuitively, the gradient magnitude decreases as the number of iterations grows: the closer the parameters are to their optimal values, the smaller the gradient becomes, and hence the more accurately the gradient must be measured. We propose an adaptive "noise reduction" algorithm that can be applied to private SGD-based empirical risk minimization (ERM) algorithms while simultaneously meeting an accuracy constraint. We apply our approach to the backpropagation (BP) neural network. Experiments show that the proposed noise parameter configuration provides sufficient privacy protection and improves data utility.
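Since the abstract describes the mechanism only at a high level, the following Python sketch illustrates one plausible reading of adaptive budget allocation for DP-SGD: the total budget ε is split across iterations with increasing weights, so later iterations (where gradients are smaller) receive a larger share of the budget and hence less Gaussian noise. The schedule, function names, and hyperparameters here are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of DP-SGD with an adaptive (decreasing-noise) budget
# schedule, in the spirit of the abstract. The exponential weighting,
# all names, and all hyperparameters are assumptions for illustration.
import numpy as np


def adaptive_budget_schedule(total_epsilon, num_iters, decay=0.05):
    """Split total_epsilon across iterations with growing weights, so
    later iterations get a larger share (hence less noise). Assumes
    simple sequential composition: the per-step budgets sum to the total."""
    weights = np.exp(decay * np.arange(num_iters))
    return total_epsilon * weights / weights.sum()


def dp_sgd(grad_fn, theta, data, total_epsilon, num_iters,
           clip_norm=1.0, lr=0.1, delta=1e-5, seed=None):
    rng = np.random.default_rng(seed)
    eps_schedule = adaptive_budget_schedule(total_epsilon, num_iters)
    theta = np.asarray(theta, dtype=float)
    for t in range(num_iters):
        g = np.asarray(grad_fn(theta, data), dtype=float)
        # Clip the gradient to bound its sensitivity by clip_norm.
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)
        # Gaussian mechanism: the noise scale shrinks as eps_t grows,
        # so later (smaller) gradients are measured more accurately.
        sigma = clip_norm * np.sqrt(2 * np.log(1.25 / delta)) / eps_schedule[t]
        theta = theta - lr * (g + rng.normal(0.0, sigma, size=g.shape))
    return theta


# Toy usage: privately estimate the mean of a 2-D dataset by minimizing
# the quadratic loss 0.5 * ||theta - mean(X)||^2.
data_rng = np.random.default_rng(0)
X = data_rng.normal(loc=[1.0, -2.0], scale=0.5, size=(100, 2))
grad = lambda theta, X: theta - X.mean(axis=0)
theta_hat = dp_sgd(grad, np.zeros(2), X, total_epsilon=2.0, num_iters=100)
print(theta_hat)
```

Under this reading, a uniform split corresponds to `decay=0`; increasing `decay` shifts budget toward late iterations, which is where the abstract argues accurate gradient measurement matters most.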