Securing Neural Networks Using Homomorphic Encryption

2021 
Neural networks are becoming increasingly popular in the modern world, yet they are often deployed without much consideration of their potential flaws, leaving them vulnerable to attack. One such vulnerability, the backdoor attack, is studied in this paper. In a backdoor attack, an adversary induces unique misclassification rules into a neural network, activated by specific trigger patterns, so that upon encountering a trigger the network predicts an output dictated by those rules, giving the attacker control over the network's output. To counter this vulnerability, we propose employing homomorphic encryption. Homomorphically encrypted data has the special property that certain operations performed on the ciphertext correspond directly to the same operations on the underlying plaintext, without the need for any additional mechanism. This property can be used in conjunction with the vulnerable neural network to revoke the attacker's control over it. In this paper, we therefore secure a neural network that is vulnerable to a backdoor attack using homomorphic encryption.
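To make the homomorphic property concrete, the following is a minimal sketch using a toy Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts. This is an illustration only, assuming nothing about the paper's actual scheme or parameters; the primes, function names, and message values below are chosen purely for demonstration and are far too small to be secure.

```python
# Toy Paillier cryptosystem (additively homomorphic) -- illustrative sketch,
# not the scheme or parameters used in the paper, and insecure by design.
import math
import random

def L(x, n):
    # Paillier's "L" function: L(x) = (x - 1) / n
    return (x - 1) // n

def keygen(p, q):
    # p, q: distinct primes (tiny here, purely for demonstration)
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1                                          # standard generator choice
    mu = pow(lam, -1, n)                               # lambda^{-1} mod n
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # c = g^m * r^n mod n^2
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    return (L(pow(c, lam, n * n), n) * mu) % n

if __name__ == "__main__":
    pub, priv = keygen(61, 53)   # toy modulus, insecure on purpose
    a, b = 42, 17
    ca, cb = encrypt(pub, a), encrypt(pub, b)
    # Multiplying ciphertexts adds the underlying plaintexts:
    c_sum = (ca * cb) % (pub[0] ** 2)
    print(decrypt(priv, c_sum))  # -> 59, i.e. a + b, computed entirely on ciphertexts
```

The same idea generalizes to schemes supporting both encrypted addition and multiplication, which is what allows the linear layers and (approximated) activations of a neural network to be evaluated on encrypted inputs.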