Simple Approximations for Fast and Secure Deep Learning on Genomic Data
2020
State-of-the-art frameworks for privacy-preserving artificial neural networks often rely on secret sharing to protect sensitive data. Unfortunately, operating on secret shared data complicates a number of non-linear functions that are central to deep learning, such as batch normalization and rectified linear units (ReLUs). We offer simple procedures for approximating these non-linear operations. The approximations we propose significantly reduce the training runtime of a privacy-preserving convolutional neural network (CNN) that we designed to diagnose breast cancer from secret shared gene expression profiles. In just over five minutes of training, our approximation-based privacy-preserving CNN achieves an average test accuracy of 96%. When we apply an exact garbled circuit solution for the ReLU function, we find that the privacy-preserving model requires days of computation to achieve the same level of accuracy. The dramatic improvement in training runtime yielded by our ReLU approximation may prove useful for other medical applications of privacy-preserving neural networks.
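The paper's exact approximation scheme is not given in the abstract, but the motivation can be illustrated: additive secret sharing supports additions and multiplications cheaply, so replacing ReLU with a low-degree polynomial avoids expensive exact protocols such as garbled circuits. Below is a hedged sketch of one common degree-2 polynomial substitute for ReLU (interpolating max(0, x) at x = -1, 0, 1), shown in plaintext for clarity; it is not necessarily the polynomial the authors use.

```python
import numpy as np

def relu_poly(x):
    """Illustrative degree-2 approximation of ReLU on [-1, 1].

    0.5*x^2 + 0.5*x matches max(0, x) exactly at x in {-1, 0, 1}.
    Because it uses only additions and multiplications, it can be
    evaluated directly on additively secret-shared values, unlike
    the exact ReLU, which needs a comparison protocol.
    """
    return 0.5 * x**2 + 0.5 * x

# Plaintext sanity check against the exact ReLU
xs = np.linspace(-1.0, 1.0, 5)
exact = np.maximum(0.0, xs)
approx = relu_poly(xs)
```

Inputs are assumed to be scaled into [-1, 1] (e.g., by batch normalization) so the approximation error stays bounded; outside that range a degree-2 polynomial diverges quickly from the true ReLU.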